00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2000 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3266 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.027 The recommended git tool is: git 00:00:00.027 using credential 00000000-0000-0000-0000-000000000002 00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.040 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.063 Using shallow fetch with depth 1 00:00:00.063 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.063 > git --version # timeout=10 00:00:00.081 > git --version # 'git version 2.39.2' 00:00:00.081 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.100 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.100 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.978 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.989 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.000 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.000 > git config core.sparsecheckout # timeout=10 00:00:04.009 > git read-tree -mu HEAD # timeout=10 00:00:04.023 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.042 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.042 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.139 [Pipeline] Start of Pipeline 00:00:04.152 [Pipeline] library 00:00:04.153 Loading library shm_lib@master 00:00:04.153 Library shm_lib@master is cached. Copying from home. 00:00:04.166 [Pipeline] node 00:00:04.178 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:04.180 [Pipeline] { 00:00:04.188 [Pipeline] catchError 00:00:04.189 [Pipeline] { 00:00:04.198 [Pipeline] wrap 00:00:04.205 [Pipeline] { 00:00:04.211 [Pipeline] stage 00:00:04.212 [Pipeline] { (Prologue) 00:00:04.227 [Pipeline] echo 00:00:04.228 Node: VM-host-SM16 00:00:04.234 [Pipeline] cleanWs 00:00:04.244 [WS-CLEANUP] Deleting project workspace... 00:00:04.244 [WS-CLEANUP] Deferred wipeout is used... 
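The checkout recorded above is Jenkins' standard depth-1 fetch followed by a detached checkout. A minimal by-hand sketch of the same sequence, assuming access to review.spdk.io (timeouts and credential handling omitted):

    git init jbp && cd jbp
    # fetch only the tip of master, as the job does with --depth=1
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # detach at the exact revision the log reports checking out
    git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d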
00:00:04.250 [WS-CLEANUP] done 00:00:04.409 [Pipeline] setCustomBuildProperty 00:00:04.470 [Pipeline] httpRequest 00:00:04.485 [Pipeline] echo 00:00:04.486 Sorcerer 10.211.164.101 is alive 00:00:04.492 [Pipeline] httpRequest 00:00:04.496 HttpMethod: GET 00:00:04.496 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.497 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.500 Response Code: HTTP/1.1 200 OK 00:00:04.501 Success: Status code 200 is in the accepted range: 200,404 00:00:04.501 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.867 [Pipeline] sh 00:00:06.144 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.158 [Pipeline] httpRequest 00:00:06.175 [Pipeline] echo 00:00:06.177 Sorcerer 10.211.164.101 is alive 00:00:06.182 [Pipeline] httpRequest 00:00:06.185 HttpMethod: GET 00:00:06.186 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:06.187 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:06.212 Response Code: HTTP/1.1 200 OK 00:00:06.212 Success: Status code 200 is in the accepted range: 200,404 00:00:06.213 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:13.856 [Pipeline] sh 00:01:14.134 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:16.676 [Pipeline] sh 00:01:16.955 + git -C spdk log --oneline -n5 00:01:16.955 719d03c6a sock/uring: only register net impl if supported 00:01:16.955 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:16.955 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:16.955 6c7c1f57e accel: add sequence outstanding stat 00:01:16.955 3bc8e6a26 accel: add utility to put task 00:01:16.973 [Pipeline] withCredentials 00:01:16.983 > git --version # timeout=10 00:01:16.995 > git --version # 'git version 2.39.2' 00:01:17.010 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:17.012 [Pipeline] { 00:01:17.021 [Pipeline] retry 00:01:17.023 [Pipeline] { 00:01:17.039 [Pipeline] sh 00:01:17.318 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:18.266 [Pipeline] } 00:01:18.289 [Pipeline] // retry 00:01:18.295 [Pipeline] } 00:01:18.316 [Pipeline] // withCredentials 00:01:18.326 [Pipeline] httpRequest 00:01:18.344 [Pipeline] echo 00:01:18.346 Sorcerer 10.211.164.101 is alive 00:01:18.355 [Pipeline] httpRequest 00:01:18.360 HttpMethod: GET 00:01:18.360 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.361 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.361 Response Code: HTTP/1.1 200 OK 00:01:18.362 Success: Status code 200 is in the accepted range: 200,404 00:01:18.362 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.013 [Pipeline] sh 00:01:27.291 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.677 [Pipeline] sh 00:01:28.954 + git -C dpdk log --oneline -n5 00:01:28.954 caf0f5d395 version: 22.11.4 00:01:28.954 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:28.954 dc9c799c7d vhost: fix missing spinlock 
unlock 00:01:28.954 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:28.954 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:28.970 [Pipeline] writeFile 00:01:28.985 [Pipeline] sh 00:01:29.263 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:29.274 [Pipeline] sh 00:01:29.552 + cat autorun-spdk.conf 00:01:29.552 SPDK_TEST_UNITTEST=1 00:01:29.552 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.552 SPDK_TEST_NVME=1 00:01:29.552 SPDK_TEST_BLOCKDEV=1 00:01:29.552 SPDK_RUN_ASAN=1 00:01:29.552 SPDK_RUN_UBSAN=1 00:01:29.552 SPDK_TEST_RAID5=1 00:01:29.552 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.552 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:29.552 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.558 RUN_NIGHTLY=1 00:01:29.560 [Pipeline] } 00:01:29.578 [Pipeline] // stage 00:01:29.593 [Pipeline] stage 00:01:29.595 [Pipeline] { (Run VM) 00:01:29.608 [Pipeline] sh 00:01:29.890 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:29.890 + echo 'Start stage prepare_nvme.sh' 00:01:29.890 Start stage prepare_nvme.sh 00:01:29.890 + [[ -n 7 ]] 00:01:29.890 + disk_prefix=ex7 00:01:29.890 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:29.890 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:29.890 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:29.890 ++ SPDK_TEST_UNITTEST=1 00:01:29.890 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.890 ++ SPDK_TEST_NVME=1 00:01:29.890 ++ SPDK_TEST_BLOCKDEV=1 00:01:29.890 ++ SPDK_RUN_ASAN=1 00:01:29.890 ++ SPDK_RUN_UBSAN=1 00:01:29.890 ++ SPDK_TEST_RAID5=1 00:01:29.890 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.890 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:29.890 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.890 ++ RUN_NIGHTLY=1 00:01:29.890 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:29.890 + nvme_files=() 00:01:29.890 + declare -A nvme_files 00:01:29.890 + backend_dir=/var/lib/libvirt/images/backends 00:01:29.890 + nvme_files['nvme.img']=5G 00:01:29.890 + nvme_files['nvme-cmb.img']=5G 00:01:29.890 + nvme_files['nvme-multi0.img']=4G 00:01:29.890 + nvme_files['nvme-multi1.img']=4G 00:01:29.890 + nvme_files['nvme-multi2.img']=4G 00:01:29.890 + nvme_files['nvme-openstack.img']=8G 00:01:29.890 + nvme_files['nvme-zns.img']=5G 00:01:29.890 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:29.890 + (( SPDK_TEST_FTL == 1 )) 00:01:29.890 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:29.890 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:29.890 + for nvme in "${!nvme_files[@]}" 00:01:29.890 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:29.890 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.890 + for nvme in "${!nvme_files[@]}" 00:01:29.890 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:30.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.457 + for nvme in "${!nvme_files[@]}" 00:01:30.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:30.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:30.457 + for nvme in "${!nvme_files[@]}" 00:01:30.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:30.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.457 + for nvme in "${!nvme_files[@]}" 00:01:30.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:30.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.457 + for nvme in "${!nvme_files[@]}" 00:01:30.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:30.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.457 + for nvme in "${!nvme_files[@]}" 00:01:30.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:31.056 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.056 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:31.056 + echo 'End stage prepare_nvme.sh' 00:01:31.056 End stage prepare_nvme.sh 00:01:31.071 [Pipeline] sh 00:01:31.349 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.349 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -H -a -v -f ubuntu2204 00:01:31.349 00:01:31.349 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:31.349 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:31.349 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:31.349 HELP=0 00:01:31.349 DRY_RUN=0 00:01:31.349 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img, 00:01:31.349 NVME_DISKS_TYPE=nvme, 00:01:31.349 NVME_AUTO_CREATE=0 00:01:31.349 NVME_DISKS_NAMESPACES=, 00:01:31.349 NVME_CMB=, 00:01:31.349 NVME_PMR=, 00:01:31.349 NVME_ZNS=, 00:01:31.349 NVME_MS=, 00:01:31.349 NVME_FDP=, 00:01:31.349 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:31.349 SPDK_VAGRANT_VMCPU=10 00:01:31.349 SPDK_VAGRANT_VMRAM=12288 00:01:31.349 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.349 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:31.349 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.349 SPDK_OPENSTACK_NETWORK=0 
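The "Formatting '...', fmt=raw ... preallocation=falloc" lines above appear to be qemu-img output from inside create_nvme_img.sh. A minimal sketch of the equivalent manual step for one image, assuming qemu-img is available (path and size taken from the loop above):

    # raw backing file, preallocated via fallocate(2),
    # matching "fmt=raw size=5368709120 preallocation=falloc"
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex7-nvme.img 5G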
00:01:31.349 VAGRANT_PACKAGE_BOX=0 00:01:31.349 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:31.349 FORCE_DISTRO=true 00:01:31.349 VAGRANT_BOX_VERSION= 00:01:31.349 EXTRA_VAGRANTFILES= 00:01:31.349 NIC_MODEL=e1000 00:01:31.349 00:01:31.349 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:31.349 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:33.874 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.439 ==> default: Creating image (snapshot of base box volume). 00:01:34.696 ==> default: Creating domain with the following settings... 00:01:34.696 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1720910843_29e6069fe974a5626514 00:01:34.696 ==> default: -- Domain type: kvm 00:01:34.696 ==> default: -- Cpus: 10 00:01:34.696 ==> default: -- Feature: acpi 00:01:34.696 ==> default: -- Feature: apic 00:01:34.696 ==> default: -- Feature: pae 00:01:34.696 ==> default: -- Memory: 12288M 00:01:34.696 ==> default: -- Memory Backing: hugepages: 00:01:34.696 ==> default: -- Management MAC: 00:01:34.696 ==> default: -- Loader: 00:01:34.696 ==> default: -- Nvram: 00:01:34.696 ==> default: -- Base box: spdk/ubuntu2204 00:01:34.696 ==> default: -- Storage pool: default 00:01:34.696 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1720910843_29e6069fe974a5626514.img (20G) 00:01:34.696 ==> default: -- Volume Cache: default 00:01:34.696 ==> default: -- Kernel: 00:01:34.696 ==> default: -- Initrd: 00:01:34.696 ==> default: -- Graphics Type: vnc 00:01:34.697 ==> default: -- Graphics Port: -1 00:01:34.697 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.697 ==> default: -- Graphics Password: Not defined 00:01:34.697 ==> default: -- Video Type: cirrus 00:01:34.697 ==> default: -- Video VRAM: 9216 00:01:34.697 ==> default: -- Sound Type: 00:01:34.697 ==> default: -- Keymap: en-us 00:01:34.697 ==> default: -- TPM Path: 00:01:34.697 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.697 ==> default: -- Command line args: 00:01:34.697 ==> default: -> value=-device, 00:01:34.697 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:34.697 ==> default: -> value=-drive, 00:01:34.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:34.697 ==> default: -> value=-device, 00:01:34.697 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.956 ==> default: Creating shared folders metadata... 00:01:34.956 ==> default: Starting domain. 00:01:36.332 ==> default: Waiting for domain to get an IP address... 00:01:46.322 ==> default: Waiting for SSH to become available... 00:01:47.258 ==> default: Configuring and enabling network interfaces... 00:01:51.447 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:56.709 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:00.894 ==> default: Mounting SSHFS shared folder... 00:02:01.828 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:02:01.828 ==> default: Checking Mount.. 00:02:02.392 ==> default: Folder Successfully Mounted! 
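For reference, the "-- Command line args" accumulated above flatten into the QEMU fragment below (emulator path from the Setup line; every other domain option omitted for brevity). The nvme controller and the nvme-ns namespace are attached as separate devices, which is how QEMU allows several namespaces to hang off one controller; only nsid=1 is used in this run.

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096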
00:02:02.392 ==> default: Running provisioner: file... 00:02:02.959 default: ~/.gitconfig => .gitconfig 00:02:03.217 00:02:03.217 SUCCESS! 00:02:03.217 00:02:03.217 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:02:03.217 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:03.217 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:02:03.217 00:02:03.226 [Pipeline] } 00:02:03.242 [Pipeline] // stage 00:02:03.249 [Pipeline] dir 00:02:03.249 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:02:03.251 [Pipeline] { 00:02:03.262 [Pipeline] catchError 00:02:03.264 [Pipeline] { 00:02:03.276 [Pipeline] sh 00:02:03.550 + vagrant ssh-config --host vagrant 00:02:03.550 + sed -ne /^Host/,$p 00:02:03.550 + tee ssh_conf 00:02:07.732 Host vagrant 00:02:07.732 HostName 192.168.121.206 00:02:07.732 User vagrant 00:02:07.732 Port 22 00:02:07.732 UserKnownHostsFile /dev/null 00:02:07.732 StrictHostKeyChecking no 00:02:07.732 PasswordAuthentication no 00:02:07.732 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:07.732 IdentitiesOnly yes 00:02:07.732 LogLevel FATAL 00:02:07.732 ForwardAgent yes 00:02:07.732 ForwardX11 yes 00:02:07.732 00:02:07.750 [Pipeline] withEnv 00:02:07.754 [Pipeline] { 00:02:07.773 [Pipeline] sh 00:02:08.178 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:08.179 source /etc/os-release 00:02:08.179 [[ -e /image.version ]] && img=$(< /image.version) 00:02:08.179 # Minimal, systemd-like check. 00:02:08.179 if [[ -e /.dockerenv ]]; then 00:02:08.179 # Clear garbage from the node's name: 00:02:08.179 # agt-er_autotest_547-896 -> autotest_547-896 00:02:08.179 # $HOSTNAME is the actual container id 00:02:08.179 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:08.179 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:08.179 # We can assume this is a mount from a host where container is running, 00:02:08.179 # so fetch its hostname to easily identify the target swarm worker. 
00:02:08.179 container="$(< /etc/hostname) ($agent)" 00:02:08.179 else 00:02:08.179 # Fallback 00:02:08.179 container=$agent 00:02:08.179 fi 00:02:08.179 fi 00:02:08.179 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:08.179 00:02:08.190 [Pipeline] } 00:02:08.210 [Pipeline] // withEnv 00:02:08.220 [Pipeline] setCustomBuildProperty 00:02:08.236 [Pipeline] stage 00:02:08.239 [Pipeline] { (Tests) 00:02:08.259 [Pipeline] sh 00:02:08.539 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:08.810 [Pipeline] sh 00:02:09.089 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:09.365 [Pipeline] timeout 00:02:09.365 Timeout set to expire in 1 hr 30 min 00:02:09.368 [Pipeline] { 00:02:09.386 [Pipeline] sh 00:02:09.664 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:10.231 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:10.245 [Pipeline] sh 00:02:10.525 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:10.797 [Pipeline] sh 00:02:11.080 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:11.356 [Pipeline] sh 00:02:11.636 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:02:11.894 ++ readlink -f spdk_repo 00:02:11.894 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:11.894 + [[ -n /home/vagrant/spdk_repo ]] 00:02:11.894 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:11.894 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:11.894 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:11.894 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:11.894 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:11.894 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:02:11.894 + cd /home/vagrant/spdk_repo 00:02:11.894 + source /etc/os-release 00:02:11.894 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:11.894 ++ NAME=Ubuntu 00:02:11.894 ++ VERSION_ID=22.04 00:02:11.894 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:11.894 ++ VERSION_CODENAME=jammy 00:02:11.894 ++ ID=ubuntu 00:02:11.894 ++ ID_LIKE=debian 00:02:11.894 ++ HOME_URL=https://www.ubuntu.com/ 00:02:11.894 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:11.894 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:11.894 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:11.894 ++ UBUNTU_CODENAME=jammy 00:02:11.894 + uname -a 00:02:11.894 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:11.894 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:12.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:02:12.153 Hugepages 00:02:12.153 node hugesize free / total 00:02:12.153 node0 1048576kB 0 / 0 00:02:12.153 node0 2048kB 0 / 0 00:02:12.153 00:02:12.153 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:12.153 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:12.153 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:12.153 + rm -f /tmp/spdk-ld-path 00:02:12.153 + source autorun-spdk.conf 00:02:12.153 ++ SPDK_TEST_UNITTEST=1 00:02:12.153 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.153 ++ SPDK_TEST_NVME=1 00:02:12.153 ++ SPDK_TEST_BLOCKDEV=1 00:02:12.153 ++ SPDK_RUN_ASAN=1 00:02:12.153 ++ SPDK_RUN_UBSAN=1 00:02:12.153 ++ SPDK_TEST_RAID5=1 00:02:12.153 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:12.153 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:12.153 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.153 ++ RUN_NIGHTLY=1 00:02:12.153 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:12.153 + [[ -n '' ]] 00:02:12.153 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:12.153 + for M in /var/spdk/build-*-manifest.txt 00:02:12.153 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:12.153 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.153 + for M in /var/spdk/build-*-manifest.txt 00:02:12.153 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:12.153 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.153 ++ uname 00:02:12.153 + [[ Linux == \L\i\n\u\x ]] 00:02:12.153 + sudo dmesg -T 00:02:12.153 + sudo dmesg --clear 00:02:12.153 + dmesg_pid=2277 00:02:12.153 + sudo dmesg -Tw 00:02:12.153 + [[ Ubuntu == FreeBSD ]] 00:02:12.153 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.153 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.153 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:12.153 + [[ -x /usr/src/fio-static/fio ]] 00:02:12.153 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:12.153 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:12.153 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:12.153 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:12.153 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:12.153 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:12.153 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:12.153 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:12.412 Test configuration: 00:02:12.412 SPDK_TEST_UNITTEST=1 00:02:12.412 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.412 SPDK_TEST_NVME=1 00:02:12.412 SPDK_TEST_BLOCKDEV=1 00:02:12.412 SPDK_RUN_ASAN=1 00:02:12.412 SPDK_RUN_UBSAN=1 00:02:12.412 SPDK_TEST_RAID5=1 00:02:12.412 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:12.412 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:12.412 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.412 RUN_NIGHTLY=1 22:48:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:12.412 22:48:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:12.412 22:48:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.412 22:48:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.412 22:48:00 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:12.412 22:48:00 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:12.412 22:48:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:12.412 22:48:00 -- paths/export.sh@5 -- $ export PATH 00:02:12.412 22:48:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:12.412 22:48:00 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:12.412 22:48:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:12.412 22:48:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720910881.XXXXXX 00:02:12.412 22:48:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720910881.kwN16y 00:02:12.412 22:48:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:12.412 22:48:01 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:02:12.412 22:48:01 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:12.412 22:48:01 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:12.412 
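The /tmp/spdk_1720910881.kwN16y workspace above comes from feeding date +%s into mktemp. Condensed, those two trace lines amount to the sketch below (names as in the log):

    ts=$(date +%s)                                    # 1720910881 in this run
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # e.g. /tmp/spdk_1720910881.kwN16y
    echo "$SPDK_WORKSPACE"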
22:48:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:12.412 22:48:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:12.412 22:48:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:12.412 22:48:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:12.412 22:48:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.412 22:48:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:12.412 22:48:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:12.412 22:48:01 -- pm/common@17 -- $ local monitor 00:02:12.412 22:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.412 22:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.412 22:48:01 -- pm/common@25 -- $ sleep 1 00:02:12.412 22:48:01 -- pm/common@21 -- $ date +%s 00:02:12.412 22:48:01 -- pm/common@21 -- $ date +%s 00:02:12.412 22:48:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720910881 00:02:12.412 22:48:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720910881 00:02:12.412 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720910881_collect-vmstat.pm.log 00:02:12.412 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720910881_collect-cpu-load.pm.log 00:02:13.347 22:48:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:13.347 22:48:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:13.347 22:48:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:13.347 22:48:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:13.347 22:48:02 -- spdk/autobuild.sh@16 -- $ date -u 00:02:13.347 Sat Jul 13 22:48:02 UTC 2024 00:02:13.347 22:48:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:13.347 v24.09-pre-202-g719d03c6a 00:02:13.347 22:48:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:13.347 22:48:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:13.347 22:48:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:13.347 22:48:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:13.347 22:48:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.347 ************************************ 00:02:13.347 START TEST asan 00:02:13.347 ************************************ 00:02:13.347 using asan 00:02:13.347 22:48:02 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:13.347 00:02:13.347 real 0m0.000s 00:02:13.347 user 0m0.000s 00:02:13.347 sys 0m0.000s 00:02:13.347 22:48:02 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:13.347 22:48:02 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:13.347 ************************************ 00:02:13.347 END TEST asan 00:02:13.347 ************************************ 00:02:13.607 22:48:02 -- common/autotest_common.sh@1142 -- $ 
return 0 00:02:13.607 22:48:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:13.607 22:48:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:13.607 22:48:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:13.607 22:48:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:13.607 22:48:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.607 ************************************ 00:02:13.607 START TEST ubsan 00:02:13.607 ************************************ 00:02:13.607 using ubsan 00:02:13.607 22:48:02 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:13.607 00:02:13.607 real 0m0.000s 00:02:13.607 user 0m0.000s 00:02:13.607 sys 0m0.000s 00:02:13.607 22:48:02 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:13.607 ************************************ 00:02:13.607 END TEST ubsan 00:02:13.607 ************************************ 00:02:13.607 22:48:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:13.607 22:48:02 -- common/autotest_common.sh@1142 -- $ return 0 00:02:13.607 22:48:02 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:13.607 22:48:02 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:13.607 22:48:02 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:13.607 22:48:02 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:13.607 22:48:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:13.607 22:48:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.608 ************************************ 00:02:13.608 START TEST build_native_dpdk 00:02:13.608 ************************************ 00:02:13.608 22:48:02 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:13.608 22:48:02 build_native_dpdk -- 
common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:13.608 caf0f5d395 version: 22.11.4 00:02:13.608 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:13.608 dc9c799c7d vhost: fix missing spinlock unlock 00:02:13.608 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:13.608 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@342 
-- $ : 1 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:13.608 22:48:02 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:13.608 patching file config/rte_config.h 00:02:13.608 Hunk #1 succeeded at 60 (offset 1 line). 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:13.608 22:48:02 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:17.800 The Meson build system 00:02:17.800 Version: 1.4.0 00:02:17.800 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:17.800 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:17.800 Build type: native build 00:02:17.800 Program cat found: YES (/usr/bin/cat) 00:02:17.800 Project name: DPDK 00:02:17.800 Project version: 22.11.4 00:02:17.800 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:17.800 C linker for the host machine: gcc ld.bfd 2.38 00:02:17.800 Host machine cpu family: x86_64 00:02:17.800 Host machine cpu: x86_64 00:02:17.800 Message: ## Building in Developer Mode ## 00:02:17.800 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.800 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:17.800 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.800 Program objdump found: YES (/usr/bin/objdump) 00:02:17.800 Program python3 found: YES (/usr/bin/python3) 00:02:17.800 Program cat found: YES (/usr/bin/cat) 00:02:17.800 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
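A few entries up, the "lt 22.11.4 21.11.0" trace walks the field-by-field version compare from scripts/common.sh before deciding whether the rte_config.h patch applies. A condensed bash equivalent (a sketch, not the exact implementation):

    ver_lt() {                      # usage: ver_lt 22.11.4 21.11.0 -> is $1 older than $2?
        local -a a b; local IFS=.- v
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                    # equal is not "less than"
    }
    ver_lt 22.11.4 21.11.0          # returns 1, matching the trace, after which
                                    # the job runs patch -p1 against config/rte_config.h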
00:02:17.800 Checking for size of "void *" : 8 00:02:17.800 Checking for size of "void *" : 8 (cached) 00:02:17.800 Library m found: YES 00:02:17.800 Library numa found: YES 00:02:17.800 Has header "numaif.h" : YES 00:02:17.800 Library fdt found: NO 00:02:17.800 Library execinfo found: NO 00:02:17.800 Has header "execinfo.h" : YES 00:02:17.800 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:17.800 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.800 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.800 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.800 Run-time dependency openssl found: YES 3.0.2 00:02:17.800 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:17.800 Library pcap found: NO 00:02:17.800 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.800 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.800 Compiler for C supports arguments -Wformat: YES 00:02:17.800 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:17.800 Compiler for C supports arguments -Wformat-security: YES 00:02:17.800 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.800 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.800 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.800 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.800 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.800 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.800 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.800 Compiler for C supports arguments -Wundef: YES 00:02:17.800 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.800 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.800 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.800 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.800 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.800 Compiler for C supports arguments -mavx512f: YES 00:02:17.800 Checking if "AVX512 checking" compiles: YES 00:02:17.800 Fetching value of define "__SSE4_2__" : 1 00:02:17.800 Fetching value of define "__AES__" : 1 00:02:17.800 Fetching value of define "__AVX__" : 1 00:02:17.800 Fetching value of define "__AVX2__" : 1 00:02:17.800 Fetching value of define "__AVX512BW__" : (undefined) 00:02:17.800 Fetching value of define "__AVX512CD__" : (undefined) 00:02:17.800 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:17.800 Fetching value of define "__AVX512F__" : (undefined) 00:02:17.800 Fetching value of define "__AVX512VL__" : (undefined) 00:02:17.800 Fetching value of define "__PCLMUL__" : 1 00:02:17.800 Fetching value of define "__RDRND__" : 1 00:02:17.800 Fetching value of define "__RDSEED__" : 1 00:02:17.800 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.800 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.800 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.800 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.800 Checking for function "getentropy" : YES 00:02:17.800 Message: lib/eal: Defining dependency "eal" 00:02:17.800 Message: lib/ring: Defining dependency "ring" 00:02:17.800 Message: lib/rcu: Defining dependency "rcu" 00:02:17.800 Message: lib/mempool: Defining dependency "mempool" 00:02:17.800 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.800 Fetching value of define "__PCLMUL__" : 
1 (cached) 00:02:17.800 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.800 Compiler for C supports arguments -mpclmul: YES 00:02:17.800 Compiler for C supports arguments -maes: YES 00:02:17.800 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.800 Compiler for C supports arguments -mavx512bw: YES 00:02:17.800 Compiler for C supports arguments -mavx512dq: YES 00:02:17.800 Compiler for C supports arguments -mavx512vl: YES 00:02:17.800 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.800 Compiler for C supports arguments -mavx2: YES 00:02:17.800 Compiler for C supports arguments -mavx: YES 00:02:17.800 Message: lib/net: Defining dependency "net" 00:02:17.800 Message: lib/meter: Defining dependency "meter" 00:02:17.800 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.800 Message: lib/pci: Defining dependency "pci" 00:02:17.800 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.800 Message: lib/metrics: Defining dependency "metrics" 00:02:17.800 Message: lib/hash: Defining dependency "hash" 00:02:17.800 Message: lib/timer: Defining dependency "timer" 00:02:17.800 Fetching value of define "__AVX2__" : 1 (cached) 00:02:17.800 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.800 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:17.800 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:17.800 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:17.800 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:17.800 Message: lib/acl: Defining dependency "acl" 00:02:17.800 Message: lib/bbdev: Defining dependency "bbdev" 00:02:17.800 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:17.800 Run-time dependency libelf found: YES 0.186 00:02:17.800 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:17.800 Message: lib/bpf: Defining dependency "bpf" 00:02:17.800 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:17.800 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.800 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.800 Message: lib/distributor: Defining dependency "distributor" 00:02:17.800 Message: lib/efd: Defining dependency "efd" 00:02:17.800 Message: lib/eventdev: Defining dependency "eventdev" 00:02:17.800 Message: lib/gpudev: Defining dependency "gpudev" 00:02:17.800 Message: lib/gro: Defining dependency "gro" 00:02:17.800 Message: lib/gso: Defining dependency "gso" 00:02:17.800 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:17.800 Message: lib/jobstats: Defining dependency "jobstats" 00:02:17.800 Message: lib/latencystats: Defining dependency "latencystats" 00:02:17.800 Message: lib/lpm: Defining dependency "lpm" 00:02:17.800 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.800 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:17.800 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:17.800 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:17.800 Message: lib/member: Defining dependency "member" 00:02:17.800 Message: lib/pcapng: Defining dependency "pcapng" 00:02:17.800 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.800 Message: lib/power: Defining dependency "power" 00:02:17.800 Message: lib/rawdev: Defining dependency "rawdev" 00:02:17.800 Message: lib/regexdev: Defining dependency "regexdev" 00:02:17.800 
Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.800 Message: lib/rib: Defining dependency "rib" 00:02:17.800 Message: lib/reorder: Defining dependency "reorder" 00:02:17.800 Message: lib/sched: Defining dependency "sched" 00:02:17.800 Message: lib/security: Defining dependency "security" 00:02:17.800 Message: lib/stack: Defining dependency "stack" 00:02:17.800 Has header "linux/userfaultfd.h" : YES 00:02:17.800 Message: lib/vhost: Defining dependency "vhost" 00:02:17.800 Message: lib/ipsec: Defining dependency "ipsec" 00:02:17.800 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.800 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:17.800 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:17.800 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:17.800 Message: lib/fib: Defining dependency "fib" 00:02:17.800 Message: lib/port: Defining dependency "port" 00:02:17.800 Message: lib/pdump: Defining dependency "pdump" 00:02:17.800 Message: lib/table: Defining dependency "table" 00:02:17.800 Message: lib/pipeline: Defining dependency "pipeline" 00:02:17.800 Message: lib/graph: Defining dependency "graph" 00:02:17.800 Message: lib/node: Defining dependency "node" 00:02:17.800 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.800 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.800 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.800 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.800 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:17.800 Compiler for C supports arguments -Wno-unused-value: YES 00:02:17.800 Compiler for C supports arguments -Wno-format: YES 00:02:17.800 Compiler for C supports arguments -Wno-format-security: YES 00:02:19.703 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:19.703 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.703 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:19.703 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:19.703 Fetching value of define "__AVX2__" : 1 (cached) 00:02:19.703 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.703 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.703 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.703 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:19.703 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:19.703 Program doxygen found: YES (/usr/bin/doxygen) 00:02:19.703 Configuring doxy-api.conf using configuration 00:02:19.703 Program sphinx-build found: NO 00:02:19.703 Configuring rte_build_config.h using configuration 00:02:19.703 Message: 00:02:19.703 ================= 00:02:19.703 Applications Enabled 00:02:19.703 ================= 00:02:19.703 00:02:19.703 apps: 00:02:19.703 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:19.703 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:19.703 00:02:19.703 00:02:19.703 Message: 00:02:19.703 ================= 00:02:19.703 Libraries Enabled 00:02:19.703 ================= 00:02:19.703 00:02:19.703 libs: 00:02:19.703 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:19.703 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:19.703 bbdev, bitratestats, bpf, cfgfile, compressdev, 
cryptodev, distributor, efd, 00:02:19.703 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:19.703 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:19.703 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:19.703 table, pipeline, graph, node, 00:02:19.703 00:02:19.703 Message: 00:02:19.703 =============== 00:02:19.703 Drivers Enabled 00:02:19.703 =============== 00:02:19.703 00:02:19.703 common: 00:02:19.703 00:02:19.703 bus: 00:02:19.703 pci, vdev, 00:02:19.703 mempool: 00:02:19.703 ring, 00:02:19.703 dma: 00:02:19.703 00:02:19.703 net: 00:02:19.703 i40e, 00:02:19.703 raw: 00:02:19.703 00:02:19.703 crypto: 00:02:19.703 00:02:19.703 compress: 00:02:19.703 00:02:19.703 regex: 00:02:19.703 00:02:19.703 vdpa: 00:02:19.703 00:02:19.703 event: 00:02:19.703 00:02:19.703 baseband: 00:02:19.703 00:02:19.703 gpu: 00:02:19.703 00:02:19.703 00:02:19.703 Message: 00:02:19.703 ================= 00:02:19.703 Content Skipped 00:02:19.703 ================= 00:02:19.703 00:02:19.703 apps: 00:02:19.703 dumpcap: missing dependency, "libpcap" 00:02:19.703 00:02:19.703 libs: 00:02:19.703 kni: explicitly disabled via build config (deprecated lib) 00:02:19.703 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:19.703 00:02:19.703 drivers: 00:02:19.703 common/cpt: not in enabled drivers build config 00:02:19.703 common/dpaax: not in enabled drivers build config 00:02:19.703 common/iavf: not in enabled drivers build config 00:02:19.703 common/idpf: not in enabled drivers build config 00:02:19.703 common/mvep: not in enabled drivers build config 00:02:19.703 common/octeontx: not in enabled drivers build config 00:02:19.703 bus/auxiliary: not in enabled drivers build config 00:02:19.703 bus/dpaa: not in enabled drivers build config 00:02:19.703 bus/fslmc: not in enabled drivers build config 00:02:19.703 bus/ifpga: not in enabled drivers build config 00:02:19.703 bus/vmbus: not in enabled drivers build config 00:02:19.703 common/cnxk: not in enabled drivers build config 00:02:19.703 common/mlx5: not in enabled drivers build config 00:02:19.704 common/qat: not in enabled drivers build config 00:02:19.704 common/sfc_efx: not in enabled drivers build config 00:02:19.704 mempool/bucket: not in enabled drivers build config 00:02:19.704 mempool/cnxk: not in enabled drivers build config 00:02:19.704 mempool/dpaa: not in enabled drivers build config 00:02:19.704 mempool/dpaa2: not in enabled drivers build config 00:02:19.704 mempool/octeontx: not in enabled drivers build config 00:02:19.704 mempool/stack: not in enabled drivers build config 00:02:19.704 dma/cnxk: not in enabled drivers build config 00:02:19.704 dma/dpaa: not in enabled drivers build config 00:02:19.704 dma/dpaa2: not in enabled drivers build config 00:02:19.704 dma/hisilicon: not in enabled drivers build config 00:02:19.704 dma/idxd: not in enabled drivers build config 00:02:19.704 dma/ioat: not in enabled drivers build config 00:02:19.704 dma/skeleton: not in enabled drivers build config 00:02:19.704 net/af_packet: not in enabled drivers build config 00:02:19.704 net/af_xdp: not in enabled drivers build config 00:02:19.704 net/ark: not in enabled drivers build config 00:02:19.704 net/atlantic: not in enabled drivers build config 00:02:19.704 net/avp: not in enabled drivers build config 00:02:19.704 net/axgbe: not in enabled drivers build config 00:02:19.704 net/bnx2x: not in enabled drivers build config 00:02:19.704 net/bnxt: not in enabled drivers build config 00:02:19.704 
net/bonding: not in enabled drivers build config 00:02:19.704 net/cnxk: not in enabled drivers build config 00:02:19.704 net/cxgbe: not in enabled drivers build config 00:02:19.704 net/dpaa: not in enabled drivers build config 00:02:19.704 net/dpaa2: not in enabled drivers build config 00:02:19.704 net/e1000: not in enabled drivers build config 00:02:19.704 net/ena: not in enabled drivers build config 00:02:19.704 net/enetc: not in enabled drivers build config 00:02:19.704 net/enetfec: not in enabled drivers build config 00:02:19.704 net/enic: not in enabled drivers build config 00:02:19.704 net/failsafe: not in enabled drivers build config 00:02:19.704 net/fm10k: not in enabled drivers build config 00:02:19.704 net/gve: not in enabled drivers build config 00:02:19.704 net/hinic: not in enabled drivers build config 00:02:19.704 net/hns3: not in enabled drivers build config 00:02:19.704 net/iavf: not in enabled drivers build config 00:02:19.704 net/ice: not in enabled drivers build config 00:02:19.704 net/idpf: not in enabled drivers build config 00:02:19.704 net/igc: not in enabled drivers build config 00:02:19.704 net/ionic: not in enabled drivers build config 00:02:19.704 net/ipn3ke: not in enabled drivers build config 00:02:19.704 net/ixgbe: not in enabled drivers build config 00:02:19.704 net/kni: not in enabled drivers build config 00:02:19.704 net/liquidio: not in enabled drivers build config 00:02:19.704 net/mana: not in enabled drivers build config 00:02:19.704 net/memif: not in enabled drivers build config 00:02:19.704 net/mlx4: not in enabled drivers build config 00:02:19.704 net/mlx5: not in enabled drivers build config 00:02:19.704 net/mvneta: not in enabled drivers build config 00:02:19.704 net/mvpp2: not in enabled drivers build config 00:02:19.704 net/netvsc: not in enabled drivers build config 00:02:19.704 net/nfb: not in enabled drivers build config 00:02:19.704 net/nfp: not in enabled drivers build config 00:02:19.704 net/ngbe: not in enabled drivers build config 00:02:19.704 net/null: not in enabled drivers build config 00:02:19.704 net/octeontx: not in enabled drivers build config 00:02:19.704 net/octeon_ep: not in enabled drivers build config 00:02:19.704 net/pcap: not in enabled drivers build config 00:02:19.704 net/pfe: not in enabled drivers build config 00:02:19.704 net/qede: not in enabled drivers build config 00:02:19.704 net/ring: not in enabled drivers build config 00:02:19.704 net/sfc: not in enabled drivers build config 00:02:19.704 net/softnic: not in enabled drivers build config 00:02:19.704 net/tap: not in enabled drivers build config 00:02:19.704 net/thunderx: not in enabled drivers build config 00:02:19.704 net/txgbe: not in enabled drivers build config 00:02:19.704 net/vdev_netvsc: not in enabled drivers build config 00:02:19.704 net/vhost: not in enabled drivers build config 00:02:19.704 net/virtio: not in enabled drivers build config 00:02:19.704 net/vmxnet3: not in enabled drivers build config 00:02:19.704 raw/cnxk_bphy: not in enabled drivers build config 00:02:19.704 raw/cnxk_gpio: not in enabled drivers build config 00:02:19.704 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:19.704 raw/ifpga: not in enabled drivers build config 00:02:19.704 raw/ntb: not in enabled drivers build config 00:02:19.704 raw/skeleton: not in enabled drivers build config 00:02:19.704 crypto/armv8: not in enabled drivers build config 00:02:19.704 crypto/bcmfs: not in enabled drivers build config 00:02:19.704 crypto/caam_jr: not in enabled drivers build config 
00:02:19.704 crypto/ccp: not in enabled drivers build config 00:02:19.704 crypto/cnxk: not in enabled drivers build config 00:02:19.704 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.704 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.704 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.704 crypto/mlx5: not in enabled drivers build config 00:02:19.704 crypto/mvsam: not in enabled drivers build config 00:02:19.704 crypto/nitrox: not in enabled drivers build config 00:02:19.704 crypto/null: not in enabled drivers build config 00:02:19.704 crypto/octeontx: not in enabled drivers build config 00:02:19.704 crypto/openssl: not in enabled drivers build config 00:02:19.704 crypto/scheduler: not in enabled drivers build config 00:02:19.704 crypto/uadk: not in enabled drivers build config 00:02:19.704 crypto/virtio: not in enabled drivers build config 00:02:19.704 compress/isal: not in enabled drivers build config 00:02:19.704 compress/mlx5: not in enabled drivers build config 00:02:19.704 compress/octeontx: not in enabled drivers build config 00:02:19.704 compress/zlib: not in enabled drivers build config 00:02:19.704 regex/mlx5: not in enabled drivers build config 00:02:19.704 regex/cn9k: not in enabled drivers build config 00:02:19.704 vdpa/ifc: not in enabled drivers build config 00:02:19.704 vdpa/mlx5: not in enabled drivers build config 00:02:19.704 vdpa/sfc: not in enabled drivers build config 00:02:19.704 event/cnxk: not in enabled drivers build config 00:02:19.704 event/dlb2: not in enabled drivers build config 00:02:19.704 event/dpaa: not in enabled drivers build config 00:02:19.704 event/dpaa2: not in enabled drivers build config 00:02:19.704 event/dsw: not in enabled drivers build config 00:02:19.704 event/opdl: not in enabled drivers build config 00:02:19.704 event/skeleton: not in enabled drivers build config 00:02:19.704 event/sw: not in enabled drivers build config 00:02:19.704 event/octeontx: not in enabled drivers build config 00:02:19.704 baseband/acc: not in enabled drivers build config 00:02:19.704 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:19.704 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:19.704 baseband/la12xx: not in enabled drivers build config 00:02:19.704 baseband/null: not in enabled drivers build config 00:02:19.704 baseband/turbo_sw: not in enabled drivers build config 00:02:19.704 gpu/cuda: not in enabled drivers build config 00:02:19.704 00:02:19.704 00:02:19.704 Build targets in project: 313 00:02:19.704 00:02:19.704 DPDK 22.11.4 00:02:19.704 00:02:19.704 User defined options 00:02:19.704 libdir : lib 00:02:19.704 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:19.704 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:19.704 c_link_args : 00:02:19.704 enable_docs : false 00:02:19.704 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.704 enable_kmods : false 00:02:19.704 machine : native 00:02:19.704 tests : false 00:02:19.704 00:02:19.704 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.704 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
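The WARNING above is emitted because the build wrapper invokes meson without the explicit `setup` subcommand. As a minimal sketch, an equivalent non-deprecated configure call, reconstructed from the "User defined options" summary, would look like the following (run from the DPDK source tree; the build-tmp directory name is assumed from the ninja step below, and all option values are taken from the summary, not from the wrapper's actual command line):

    # Hypothetical reconstruction of the configure step -- not the literal
    # command the wrapper ran, just the same logged options in the explicit
    # `meson setup` form that avoids the deprecation warning.
    meson setup build-tmp \
        --libdir lib \
        --prefix /home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

Invoking `meson setup` explicitly yields the same configuration while silencing the ambiguity warning.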
00:02:19.704 22:48:08 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:19.704 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:19.704 [1/740] Generating lib/rte_kvargs_def with a custom command 00:02:19.704 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:19.704 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:19.704 [4/740] Generating lib/rte_telemetry_def with a custom command 00:02:19.704 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:19.704 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.704 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.704 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.704 [9/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.704 [10/740] Linking static target lib/librte_kvargs.a 00:02:19.704 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.704 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.704 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.704 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.704 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.704 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.704 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.704 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.704 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.962 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.962 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:19.962 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.962 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.962 [24/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.962 [25/740] Linking target lib/librte_kvargs.so.23.0 00:02:19.962 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.962 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.962 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.962 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.962 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:19.962 [31/740] Linking static target lib/librte_telemetry.a 00:02:20.220 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:20.220 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:20.220 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:20.220 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.220 [36/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:20.220 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:20.220 [38/740] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:20.220 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:20.220 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:20.220 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.477 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.477 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:20.477 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:20.477 [45/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.477 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.477 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.477 [48/740] Linking target lib/librte_telemetry.so.23.0 00:02:20.477 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.477 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.477 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:20.477 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:20.735 [53/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:20.735 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.735 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.735 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.735 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.735 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.735 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.735 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.735 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.735 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.735 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.735 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:20.735 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:20.735 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.735 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.735 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:20.735 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:20.735 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.994 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.994 [72/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.994 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:20.994 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:20.994 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.994 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:20.994 [77/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.994 [78/740] Generating 
lib/rte_eal_def with a custom command 00:02:20.994 [79/740] Generating lib/rte_eal_mingw with a custom command 00:02:20.994 [80/740] Generating lib/rte_ring_def with a custom command 00:02:20.994 [81/740] Generating lib/rte_ring_mingw with a custom command 00:02:20.994 [82/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:20.994 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:20.994 [84/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:20.994 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:02:20.994 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.252 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.252 [88/740] Linking static target lib/librte_ring.a 00:02:21.252 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.252 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:21.252 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:21.252 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.252 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.252 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.510 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.510 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.510 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:21.510 [98/740] Linking static target lib/librte_eal.a 00:02:21.510 [99/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:21.510 [100/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.510 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.769 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.769 [103/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.769 [104/740] Linking static target lib/librte_rcu.a 00:02:21.769 [105/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.769 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.769 [107/740] Linking static target lib/librte_mempool.a 00:02:22.026 [108/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.026 [109/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:22.026 [110/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.026 [111/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.026 [112/740] Generating lib/rte_net_def with a custom command 00:02:22.026 [113/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.026 [114/740] Generating lib/rte_net_mingw with a custom command 00:02:22.026 [115/740] Generating lib/rte_meter_def with a custom command 00:02:22.026 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:22.026 [117/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.026 [118/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.026 [119/740] Linking static target lib/librte_meter.a 00:02:22.026 [120/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.286 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.286 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.286 [123/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.286 [124/740] Linking static target lib/librte_net.a 00:02:22.600 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.600 [126/740] Linking static target lib/librte_mbuf.a 00:02:22.600 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.600 [128/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.600 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.600 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.600 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.600 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.858 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.858 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.858 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.115 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.115 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:23.115 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:23.115 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:23.115 [140/740] Generating lib/rte_pci_def with a custom command 00:02:23.115 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:23.115 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.115 [143/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:23.115 [144/740] Linking static target lib/librte_pci.a 00:02:23.374 [145/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.374 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:23.374 [147/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.374 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.374 [149/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:23.374 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.632 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.632 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.632 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.632 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.632 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.632 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.632 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:23.632 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:23.632 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.632 [160/740] Generating lib/rte_metrics_def with a custom command 00:02:23.632 [161/740] Generating lib/rte_metrics_mingw with a custom command 00:02:23.632 [162/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.632 [163/740] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.632 [164/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:23.632 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.890 [166/740] Generating lib/rte_hash_def with a custom command 00:02:23.890 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.890 [168/740] Generating lib/rte_hash_mingw with a custom command 00:02:23.890 [169/740] Linking static target lib/librte_cmdline.a 00:02:23.890 [170/740] Generating lib/rte_timer_def with a custom command 00:02:23.890 [171/740] Generating lib/rte_timer_mingw with a custom command 00:02:23.890 [172/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.890 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.148 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:24.148 [175/740] Linking static target lib/librte_metrics.a 00:02:24.148 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.148 [177/740] Linking static target lib/librte_timer.a 00:02:24.405 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.662 [179/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.663 [180/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.663 [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:24.920 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.920 [183/740] Linking static target lib/librte_ethdev.a 00:02:24.920 [184/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.920 [185/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:24.920 [186/740] Generating lib/rte_acl_def with a custom command 00:02:24.920 [187/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:24.920 [188/740] Generating lib/rte_acl_mingw with a custom command 00:02:24.920 [189/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:24.920 [190/740] Generating lib/rte_bbdev_def with a custom command 00:02:25.178 [191/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:25.178 [192/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:25.178 [193/740] Generating lib/rte_bitratestats_def with a custom command 00:02:25.178 [194/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:25.436 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:25.436 [196/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:25.436 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:25.436 [198/740] Linking static target lib/librte_bitratestats.a 00:02:25.694 [199/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.694 [200/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:25.694 [201/740] Linking static target lib/librte_bbdev.a 00:02:25.952 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:25.952 [203/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.952 [204/740] Linking static target lib/librte_hash.a 00:02:26.211 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:26.470 [206/740] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:26.470 [207/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.470 [208/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:26.470 [209/740] Generating lib/rte_bpf_def with a custom command 00:02:26.470 [210/740] Generating lib/rte_bpf_mingw with a custom command 00:02:26.470 [211/740] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:26.470 [212/740] Linking static target lib/acl/libavx512_tmp.a 00:02:26.729 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:26.729 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:26.729 [215/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:26.729 [216/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.729 [217/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:26.729 [218/740] Linking static target lib/librte_cfgfile.a 00:02:26.987 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:26.987 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:26.987 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:26.987 [222/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:27.246 [223/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.246 [224/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.246 [225/740] Generating lib/rte_cryptodev_def with a custom command 00:02:27.246 [226/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:27.246 [227/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.246 [228/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.246 [229/740] Linking static target lib/librte_compressdev.a 00:02:27.505 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:27.505 [231/740] Linking static target lib/librte_acl.a 00:02:27.505 [232/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:27.505 [233/740] Linking static target lib/librte_bpf.a 00:02:27.505 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.764 [235/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.764 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:27.764 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:27.764 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.764 [239/740] Generating lib/rte_efd_def with a custom command 00:02:27.764 [240/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.764 [241/740] Generating lib/rte_efd_mingw with a custom command 00:02:27.764 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:28.023 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:28.023 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:28.023 [245/740] Linking static target lib/librte_distributor.a 00:02:28.295 [246/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.295 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:28.295 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.295 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:28.553 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:28.553 [251/740] Generating lib/rte_eventdev_def with a custom command 00:02:28.553 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:29.120 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:29.120 [254/740] Linking static target lib/librte_efd.a 00:02:29.120 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:29.120 [256/740] Generating lib/rte_gpudev_def with a custom command 00:02:29.120 [257/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:29.120 [258/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.120 [259/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:29.120 [260/740] Linking static target lib/librte_cryptodev.a 00:02:29.378 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:29.378 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:29.378 [263/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.378 [264/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:29.378 [265/740] Linking static target lib/librte_gpudev.a 00:02:29.636 [266/740] Linking target lib/librte_eal.so.23.0 00:02:29.636 [267/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:29.636 [268/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:29.636 [269/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:29.636 [270/740] Generating lib/rte_gro_def with a custom command 00:02:29.636 [271/740] Linking target lib/librte_ring.so.23.0 00:02:29.636 [272/740] Linking target lib/librte_meter.so.23.0 00:02:29.636 [273/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:29.895 [274/740] Linking target lib/librte_pci.so.23.0 00:02:29.895 [275/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:29.895 [276/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:29.895 [277/740] Linking target lib/librte_rcu.so.23.0 00:02:29.895 [278/740] Linking target lib/librte_mempool.so.23.0 00:02:29.895 [279/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:29.895 [280/740] Linking target lib/librte_timer.so.23.0 00:02:29.895 [281/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:29.895 [282/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:29.895 [283/740] Linking target lib/librte_acl.so.23.0 00:02:29.895 [284/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:29.895 [285/740] Linking target lib/librte_mbuf.so.23.0 00:02:30.153 [286/740] Linking target lib/librte_cfgfile.so.23.0 00:02:30.153 [287/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:30.153 [288/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.153 [289/740] Generating lib/rte_gro_mingw with a custom 
command 00:02:30.153 [290/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:30.153 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:30.153 [292/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:30.153 [293/740] Linking target lib/librte_net.so.23.0 00:02:30.153 [294/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:30.153 [295/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:30.153 [296/740] Linking target lib/librte_bbdev.so.23.0 00:02:30.153 [297/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.412 [298/740] Linking target lib/librte_cmdline.so.23.0 00:02:30.412 [299/740] Linking target lib/librte_ethdev.so.23.0 00:02:30.412 [300/740] Linking target lib/librte_hash.so.23.0 00:02:30.412 [301/740] Linking target lib/librte_compressdev.so.23.0 00:02:30.412 [302/740] Linking target lib/librte_distributor.so.23.0 00:02:30.412 [303/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:30.412 [304/740] Linking static target lib/librte_eventdev.a 00:02:30.412 [305/740] Linking target lib/librte_gpudev.so.23.0 00:02:30.412 [306/740] Linking static target lib/librte_gro.a 00:02:30.412 [307/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:30.412 [308/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:30.412 [309/740] Linking target lib/librte_metrics.so.23.0 00:02:30.412 [310/740] Linking target lib/librte_bpf.so.23.0 00:02:30.412 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:30.412 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:30.412 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:30.412 [314/740] Linking target lib/librte_efd.so.23.0 00:02:30.670 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:30.670 [316/740] Generating lib/rte_gso_mingw with a custom command 00:02:30.670 [317/740] Generating lib/rte_gso_def with a custom command 00:02:30.670 [318/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:30.670 [319/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:30.670 [320/740] Linking target lib/librte_bitratestats.so.23.0 00:02:30.670 [321/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.670 [322/740] Linking target lib/librte_gro.so.23.0 00:02:30.670 [323/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:30.670 [324/740] Generating lib/rte_ip_frag_def with a custom command 00:02:30.671 [325/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:30.929 [326/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:30.929 [327/740] Linking static target lib/librte_jobstats.a 00:02:30.929 [328/740] Generating lib/rte_jobstats_def with a custom command 00:02:30.929 [329/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:30.929 [330/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:30.929 [331/740] Linking static target lib/librte_gso.a 00:02:30.929 [332/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:30.929 [333/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:31.188 [334/740] 
Generating lib/rte_latencystats_def with a custom command 00:02:31.188 [335/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:31.188 [336/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.188 [337/740] Linking target lib/librte_gso.so.23.0 00:02:31.188 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:31.189 [339/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:31.189 [340/740] Generating lib/rte_lpm_def with a custom command 00:02:31.189 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:31.189 [342/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.189 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:31.189 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:31.189 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:31.189 [346/740] Linking static target lib/librte_ip_frag.a 00:02:31.447 [347/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.705 [348/740] Linking target lib/librte_ip_frag.so.23.0 00:02:31.705 [349/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:31.705 [350/740] Linking static target lib/librte_latencystats.a 00:02:31.705 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:31.705 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:31.705 [353/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:31.705 [354/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:31.705 [355/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:31.705 [356/740] Generating lib/rte_member_def with a custom command 00:02:31.705 [357/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.705 [358/740] Generating lib/rte_member_mingw with a custom command 00:02:31.705 [359/740] Generating lib/rte_pcapng_def with a custom command 00:02:31.705 [360/740] Linking target lib/librte_cryptodev.so.23.0 00:02:31.705 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:31.963 [362/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.963 [363/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:31.963 [364/740] Linking target lib/librte_latencystats.so.23.0 00:02:31.963 [365/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:31.963 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:31.963 [367/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:32.222 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:32.222 [369/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:32.222 [370/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:32.222 [371/740] Linking static target lib/librte_lpm.a 00:02:32.222 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:32.222 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:32.222 [374/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:32.222 [375/740] 
Generating lib/rte_power_def with a custom command 00:02:32.480 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:32.480 [377/740] Generating lib/rte_rawdev_def with a custom command 00:02:32.480 [378/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:32.480 [379/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:32.480 [380/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:32.480 [381/740] Generating lib/rte_regexdev_def with a custom command 00:02:32.480 [382/740] Linking static target lib/librte_pcapng.a 00:02:32.480 [383/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:32.480 [384/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.480 [385/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.480 [386/740] Linking target lib/librte_eventdev.so.23.0 00:02:32.480 [387/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:32.480 [388/740] Linking target lib/librte_lpm.so.23.0 00:02:32.738 [389/740] Generating lib/rte_dmadev_def with a custom command 00:02:32.738 [390/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:32.738 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:32.738 [392/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:32.738 [393/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:32.738 [394/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:32.738 [395/740] Linking static target lib/librte_rawdev.a 00:02:32.738 [396/740] Generating lib/rte_rib_def with a custom command 00:02:32.738 [397/740] Generating lib/rte_rib_mingw with a custom command 00:02:32.738 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:32.738 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:32.738 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.738 [401/740] Linking target lib/librte_pcapng.so.23.0 00:02:32.738 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:32.738 [403/740] Linking static target lib/librte_power.a 00:02:32.996 [404/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.996 [405/740] Linking static target lib/librte_dmadev.a 00:02:32.996 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:32.996 [407/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:32.997 [408/740] Linking static target lib/librte_member.a 00:02:32.997 [409/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:32.997 [410/740] Linking static target lib/librte_regexdev.a 00:02:32.997 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:33.254 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:33.255 [413/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.255 [414/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.255 [415/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:33.255 [416/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:33.255 [417/740] Linking target lib/librte_rawdev.so.23.0 00:02:33.255 
[418/740] Generating lib/rte_sched_def with a custom command 00:02:33.255 [419/740] Linking target lib/librte_member.so.23.0 00:02:33.255 [420/740] Generating lib/rte_sched_mingw with a custom command 00:02:33.255 [421/740] Generating lib/rte_security_def with a custom command 00:02:33.255 [422/740] Generating lib/rte_security_mingw with a custom command 00:02:33.255 [423/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.255 [424/740] Linking static target lib/librte_reorder.a 00:02:33.513 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:33.513 [426/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.513 [427/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:33.513 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:33.513 [429/740] Linking static target lib/librte_stack.a 00:02:33.513 [430/740] Generating lib/rte_stack_def with a custom command 00:02:33.513 [431/740] Generating lib/rte_stack_mingw with a custom command 00:02:33.513 [432/740] Linking target lib/librte_dmadev.so.23.0 00:02:33.513 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:33.513 [434/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.513 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:33.513 [436/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:33.513 [437/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.513 [438/740] Linking static target lib/librte_rib.a 00:02:33.513 [439/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.513 [440/740] Linking target lib/librte_reorder.so.23.0 00:02:33.513 [441/740] Linking target lib/librte_stack.so.23.0 00:02:33.513 [442/740] Linking target lib/librte_regexdev.so.23.0 00:02:33.771 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.771 [444/740] Linking target lib/librte_power.so.23.0 00:02:34.029 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:34.029 [446/740] Linking static target lib/librte_security.a 00:02:34.029 [447/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:34.029 [448/740] Generating lib/rte_vhost_def with a custom command 00:02:34.029 [449/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.029 [450/740] Generating lib/rte_vhost_mingw with a custom command 00:02:34.029 [451/740] Linking target lib/librte_rib.so.23.0 00:02:34.029 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:34.294 [453/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:34.294 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.294 [455/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:34.294 [456/740] Linking static target lib/librte_sched.a 00:02:34.294 [457/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.573 [458/740] Linking target lib/librte_security.so.23.0 00:02:34.573 [459/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:34.831 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:34.831 [461/740] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:34.831 [462/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:34.831 [463/740] Generating lib/rte_ipsec_def with a custom command 00:02:34.831 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:34.831 [465/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.831 [466/740] Linking target lib/librte_sched.so.23.0 00:02:34.831 [467/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.089 [468/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:35.089 [469/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:35.089 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:35.347 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:35.347 [472/740] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:35.347 [473/740] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:35.347 [474/740] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:35.347 [475/740] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:35.347 [476/740] Generating lib/rte_fib_def with a custom command 00:02:35.347 [477/740] Generating lib/rte_fib_mingw with a custom command 00:02:35.347 [478/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:35.605 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:35.605 [480/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:35.605 [481/740] Linking static target lib/librte_ipsec.a 00:02:35.864 [482/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.864 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:36.122 [484/740] Linking target lib/librte_ipsec.so.23.0 00:02:36.122 [485/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:36.122 [486/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:36.122 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:36.122 [488/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:36.122 [489/740] Linking static target lib/librte_fib.a 00:02:36.122 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:36.381 [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.381 [492/740] Linking target lib/librte_fib.so.23.0 00:02:36.381 [493/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:36.639 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:36.639 [495/740] Generating lib/rte_port_def with a custom command 00:02:36.639 [496/740] Generating lib/rte_port_mingw with a custom command 00:02:36.639 [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:36.898 [498/740] Generating lib/rte_pdump_def with a custom command 00:02:36.898 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:36.898 [500/740] Generating lib/rte_pdump_mingw with a custom command 00:02:36.898 [501/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:36.898 [502/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:36.898 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:36.898 [504/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:37.157 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:37.157 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:37.157 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:37.416 [508/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:37.416 [509/740] Linking static target lib/librte_port.a 00:02:37.416 [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:37.416 [511/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:37.416 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:37.674 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:37.674 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:37.674 [515/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:37.674 [516/740] Linking static target lib/librte_pdump.a 00:02:37.932 [517/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.933 [518/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.933 [519/740] Linking target lib/librte_port.so.23.0 00:02:37.933 [520/740] Linking target lib/librte_pdump.so.23.0 00:02:38.191 [521/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:38.191 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:38.191 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:38.191 [524/740] Generating lib/rte_table_def with a custom command 00:02:38.191 [525/740] Generating lib/rte_table_mingw with a custom command 00:02:38.191 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:38.449 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:38.449 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:38.449 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:38.449 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:38.707 [531/740] Generating lib/rte_pipeline_def with a custom command 00:02:38.707 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:38.707 [533/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:38.707 [534/740] Linking static target lib/librte_table.a 00:02:38.707 [535/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:38.966 [536/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:39.224 [537/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:39.224 [538/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.224 [539/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:39.224 [540/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.224 [541/740] Linking target lib/librte_table.so.23.0 00:02:39.483 [542/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:39.483 [543/740] Generating lib/rte_graph_def with a custom command 00:02:39.483 [544/740] Generating lib/rte_graph_mingw with a custom command 00:02:39.483 [545/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:39.483 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:39.742 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:39.742 [548/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:39.742 [549/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:39.742 [550/740] Linking static target lib/librte_graph.a 00:02:40.000 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:40.000 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:40.000 [553/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:40.259 [554/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:40.259 [555/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:40.259 [556/740] Generating lib/rte_node_def with a custom command 00:02:40.517 [557/740] Generating lib/rte_node_mingw with a custom command 00:02:40.517 [558/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:40.517 [559/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:40.517 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.775 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:40.775 [562/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.775 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.775 [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.775 [565/740] Linking target lib/librte_graph.so.23.0 00:02:40.775 [566/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:40.775 [567/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:40.775 [568/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:40.775 [569/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:41.032 [570/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:41.032 [571/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.032 [572/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:41.032 [573/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:41.032 [574/740] Linking static target lib/librte_node.a 00:02:41.032 [575/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:41.032 [576/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:41.032 [577/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:41.032 [578/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.032 [579/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:41.032 [580/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.289 [581/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.289 [582/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.289 [583/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.289 [584/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.289 [585/740] Linking target lib/librte_node.so.23.0 00:02:41.289 [586/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:02:41.289 [587/740] Linking static target drivers/librte_bus_vdev.a 00:02:41.289 [588/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.289 [589/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.289 [590/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.289 [591/740] Linking static target drivers/librte_bus_pci.a 00:02:41.547 [592/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.547 [593/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.547 [594/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:41.805 [595/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:41.805 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:41.805 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:41.805 [598/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.805 [599/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:41.805 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.805 [601/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:41.805 [602/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.063 [603/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:42.063 [604/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.063 [605/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.063 [606/740] Linking static target drivers/librte_mempool_ring.a 00:02:42.063 [607/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.063 [608/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:42.063 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:42.321 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:42.886 [611/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:43.144 [612/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:43.144 [613/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:43.144 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:43.402 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:43.402 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:43.660 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:43.660 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:43.916 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:44.172 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:44.172 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:44.172 [622/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:44.172 [623/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:44.429 [624/740] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:44.992 [625/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:44.992 [626/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:45.248 [627/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:45.248 [628/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:45.248 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:45.248 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:45.248 [631/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:45.505 [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:45.505 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:45.762 [634/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:46.019 [635/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:46.275 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:46.275 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:46.275 [638/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:46.275 [639/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:46.532 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:46.532 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:46.532 [642/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:46.532 [643/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.532 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:46.532 [645/740] Linking static target drivers/librte_net_i40e.a 00:02:46.790 [646/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.790 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:46.790 [648/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:47.052 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:47.052 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:47.328 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:47.328 [652/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.599 [653/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:47.599 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:47.599 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:47.599 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:47.599 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:47.599 [658/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:47.599 [659/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:47.856 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:47.856 [661/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.141 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:48.141 [663/740] Linking static target lib/librte_vhost.a 00:02:48.141 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:48.141 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:48.141 [666/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:48.399 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:48.399 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:48.657 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:48.916 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:49.173 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:49.173 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:49.173 [673/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.430 [674/740] Linking target lib/librte_vhost.so.23.0 00:02:49.430 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:49.430 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:49.430 [677/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:49.688 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:49.688 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:49.688 [680/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:49.946 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:49.946 [682/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:49.946 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:49.946 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:49.946 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:50.205 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:50.464 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:50.464 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:50.464 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:50.464 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:50.464 [691/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:50.722 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:50.722 [693/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:50.722 [694/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:50.981 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:51.240 [696/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:51.240 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:51.498 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:51.498 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:51.498 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:51.756 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:52.014 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:52.273 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:52.273 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:52.273 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:52.531 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:52.531 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:52.531 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:52.789 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:53.048 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:53.306 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:53.306 [712/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:53.565 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:53.565 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:53.565 [715/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:53.565 [716/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:53.565 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:53.565 [718/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:54.129 [719/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:54.129 [720/740] Linking static target lib/librte_pipeline.a 00:02:54.385 [721/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:54.385 [722/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:54.385 [723/740] Linking target app/dpdk-proc-info 00:02:54.385 [724/740] Linking target app/dpdk-test-acl 00:02:54.385 [725/740] Linking target app/dpdk-test-cmdline 00:02:54.385 [726/740] Linking target app/dpdk-test-compress-perf 00:02:54.644 [727/740] Linking target app/dpdk-test-crypto-perf 00:02:54.644 [728/740] Linking target app/dpdk-test-bbdev 00:02:54.644 [729/740] Linking target app/dpdk-pdump 00:02:54.901 [730/740] Linking target app/dpdk-test-eventdev 00:02:54.901 [731/740] Linking target app/dpdk-test-fib 00:02:54.901 [732/740] Linking target app/dpdk-test-flow-perf 00:02:54.901 [733/740] Linking target app/dpdk-test-pipeline 00:02:54.901 [734/740] Linking target app/dpdk-test-gpudev 00:02:54.901 [735/740] Linking target app/dpdk-test-regex 00:02:54.901 [736/740] Linking target app/dpdk-test-sad 00:02:54.901 [737/740] Linking target app/dpdk-test-security-perf 00:02:54.901 [738/740] Linking target app/dpdk-testpmd 00:02:57.429 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.687 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:57.688 22:48:46 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:57.688 22:48:46 build_native_dpdk -- 
common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.688 22:48:46 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:57.688 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:57.688 [0/1] Installing files. 00:02:57.947 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:57.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.210 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.211 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.212 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.212 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.213 
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.213 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.213 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.213 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.473 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.474 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.474 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.474 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.474 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.474 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.474 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.736 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
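Note on the two driver destinations above: the static archives (.a) land next to the regular libraries in build/lib, while each PMD's shared object is placed in the dedicated plugin directory build/lib/dpdk/pmds-23.0. A minimal sketch (not part of this log; the program name and the choice of PMD are illustrative) of loading one of those plugins explicitly through the standard EAL -d option:

    /* Sketch: ask the EAL to dlopen a specific PMD from the pmds-23.0
     * directory recorded in the install log above. */
    #include <stdlib.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *argv[] = {
            "pmd-load-sketch", /* argv[0]: illustrative program name */
            "-d", "/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0",
            "--no-pci",        /* skip PCI probing so the sketch runs without real NICs */
        };
        int argc = sizeof(argv) / sizeof(argv[0]);

        if (rte_eal_init(argc, argv) < 0)
            return EXIT_FAILURE; /* EAL already logged the reason */

        rte_eal_cleanup();
        return EXIT_SUCCESS;
    }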
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.736 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
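The protocol headers just installed (rte_ip.h, rte_tcp.h, rte_ether.h, and the rest of lib/net) are plain C headers, usable even outside a full datapath application. A tiny sketch (not from the log) that formats a MAC address with the rte_ether.h helpers; the example address value is made up:

    /* Sketch: format an Ethernet address using the installed net headers. */
    #include <stdio.h>
    #include <rte_ether.h>

    int main(void)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 } /* example value */
        };
        char buf[RTE_ETHER_ADDR_FMT_SIZE];

        rte_ether_format_addr(buf, sizeof(buf), &mac);
        printf("%s\n", buf); /* prints 00:11:22:33:44:55 */
        return 0;
    }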
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.737 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.738 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:58.739 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
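With libdpdk.pc and libdpdk-libs.pc in build/lib/pkgconfig, the whole install is consumable through pkg-config. A hypothetical consumer (file name and build line are illustrative, assuming PKG_CONFIG_PATH points at that directory):

    /* hello_eal.c -- sketch of a minimal consumer of this install.
     * Build (illustrative): cc hello_eal.c $(pkg-config --cflags --libs libdpdk) */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        printf("EAL up, %u lcore(s)\n", rte_lcore_count());
        rte_eal_cleanup();
        return 0;
    }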
librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:58.739 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:58.739 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:58.739 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:58.739 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:58.739 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:58.739 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:58.739 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:58.739 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:58.739 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:58.739 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:58.739 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:58.739 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:58.739 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:58.739 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:58.739 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:58.739 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:58.739 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:58.739 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:58.739 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:58.739 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:58.739 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:58.739 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:58.739 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:58.739 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:58.739 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:58.739 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:58.739 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:58.739 Installing symlink pointing to 
librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:58.739 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:58.739 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:58.739 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:58.739 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:58.739 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:58.739 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:58.739 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:58.739 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:58.739 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:58.739 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:58.739 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:58.739 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:58.739 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:58.739 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:58.739 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:58.739 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:58.739 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:58.739 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:58.739 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:58.739 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:58.739 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:58.739 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:58.739 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:58.739 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:58.739 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:58.739 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:58.739 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:58.739 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:58.739 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:58.739 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:58.739 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:58.739 Installing symlink pointing to librte_member.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:58.739 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:58.739 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:58.739 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:58.739 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:58.739 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:58.739 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:58.739 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:58.739 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:58.739 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:58.739 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:58.739 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:58.739 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:58.739 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:58.739 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:58.739 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:58.739 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:58.739 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:58.739 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:58.739 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:58.739 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:58.739 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:58.739 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:58.739 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:58.739 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:58.739 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:58.739 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:58.739 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:58.739 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:58.739 
Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23
00:02:58.740 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:02:58.740 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23
00:02:58.740 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:02:58.740 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23
00:02:58.740 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:02:58.740 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23
00:02:58.740 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:02:58.740 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23
00:02:58.740 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:02:58.740 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:02:58.740 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:02:58.740 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:02:58.740 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:02:58.740 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:58.740 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:58.740 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:58.740 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:58.740 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:58.740 22:48:48 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat
00:02:58.740 22:48:48 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:58.740
00:02:58.740 real	0m45.845s
00:02:58.740 user	5m5.252s
00:02:58.740 sys	0m44.818s
00:02:58.740 22:48:48 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:58.740 22:48:48 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:58.740 ************************************
00:02:58.740 END TEST build_native_dpdk
00:02:58.740 ************************************
00:02:58.740 22:48:48 -- common/autotest_common.sh@1142 -- $ return 0
00:02:58.740 22:48:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:58.740 22:48:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:58.740 22:48:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:58.740 22:48:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:58.740 22:48:48 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:58.740 22:48:48 -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:58.740 22:48:48 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build
00:02:58.740 22:48:48 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:02:58.740 22:48:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:58.740 22:48:48 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.740 ************************************
00:02:58.740 START TEST unittest_build
00:02:58.740 ************************************
00:02:58.740 22:48:48 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build
00:02:58.740 22:48:48 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared
00:02:58.999 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:02:58.999 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.999 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.999 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:59.258 Using 'verbs' RDMA provider
00:03:14.700 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:24.692 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:24.950 Creating mk/config.mk...done.
00:03:24.950 Creating mk/cc.flags.mk...done.
00:03:24.950 Type 'make' to build.
00:03:24.950 22:49:14 unittest_build -- common/autobuild_common.sh@412 -- $ make -j10
00:03:25.208 make[1]: Nothing to be done for 'all'.
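The configure invocation recorded above is where the freshly installed DPDK gets wired into the SPDK build: --with-dpdk points at the prefix the preceding install step populated, and --without-shared links those libraries statically. A minimal sketch of reproducing that step by hand, assuming the same checkout locations this job uses (paths are from this CI environment; adjust for your own tree):

    $ cd /home/vagrant/spdk_repo/spdk
    # Flags below are the ones recorded in the log; the sanitizer and
    # coverage options match what this unittest job enables.
    $ ./configure --enable-debug --enable-werror --enable-ubsan --enable-asan \
          --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
          --without-shared
    # configure writes mk/config.mk and mk/cc.flags.mk, after which the
    # build is driven by plain make (this job uses -j10).
    $ make -j10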
00:03:43.281 CC lib/log/log.o 00:03:43.282 CC lib/log/log_deprecated.o 00:03:43.282 CC lib/log/log_flags.o 00:03:43.282 CC lib/ut/ut.o 00:03:43.282 CC lib/ut_mock/mock.o 00:03:43.540 LIB libspdk_ut.a 00:03:43.540 LIB libspdk_ut_mock.a 00:03:43.540 LIB libspdk_log.a 00:03:43.540 CC lib/ioat/ioat.o 00:03:43.540 CC lib/util/base64.o 00:03:43.540 CC lib/util/cpuset.o 00:03:43.540 CC lib/dma/dma.o 00:03:43.540 CC lib/util/bit_array.o 00:03:43.540 CC lib/util/crc16.o 00:03:43.540 CXX lib/trace_parser/trace.o 00:03:43.540 CC lib/util/crc32.o 00:03:43.540 CC lib/util/crc32c.o 00:03:43.798 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.798 CC lib/vfio_user/host/vfio_user.o 00:03:43.798 CC lib/util/crc32_ieee.o 00:03:43.798 CC lib/util/crc64.o 00:03:43.798 CC lib/util/dif.o 00:03:43.798 LIB libspdk_dma.a 00:03:44.055 CC lib/util/fd.o 00:03:44.055 CC lib/util/file.o 00:03:44.055 CC lib/util/hexlify.o 00:03:44.055 LIB libspdk_ioat.a 00:03:44.055 CC lib/util/iov.o 00:03:44.055 CC lib/util/math.o 00:03:44.055 CC lib/util/pipe.o 00:03:44.055 CC lib/util/strerror_tls.o 00:03:44.055 LIB libspdk_vfio_user.a 00:03:44.055 CC lib/util/string.o 00:03:44.055 CC lib/util/uuid.o 00:03:44.055 CC lib/util/fd_group.o 00:03:44.055 CC lib/util/xor.o 00:03:44.055 CC lib/util/zipf.o 00:03:44.619 LIB libspdk_util.a 00:03:44.619 LIB libspdk_trace_parser.a 00:03:44.876 CC lib/idxd/idxd.o 00:03:44.876 CC lib/json/json_parse.o 00:03:44.876 CC lib/idxd/idxd_user.o 00:03:44.876 CC lib/json/json_util.o 00:03:44.876 CC lib/rdma_utils/rdma_utils.o 00:03:44.876 CC lib/rdma_provider/common.o 00:03:44.876 CC lib/vmd/vmd.o 00:03:44.876 CC lib/env_dpdk/env.o 00:03:44.876 CC lib/conf/conf.o 00:03:44.876 CC lib/env_dpdk/memory.o 00:03:44.876 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:44.876 LIB libspdk_conf.a 00:03:45.134 CC lib/env_dpdk/init.o 00:03:45.134 CC lib/json/json_write.o 00:03:45.134 CC lib/env_dpdk/pci.o 00:03:45.134 CC lib/env_dpdk/threads.o 00:03:45.134 LIB libspdk_rdma_utils.a 00:03:45.134 LIB libspdk_rdma_provider.a 00:03:45.134 CC lib/env_dpdk/pci_ioat.o 00:03:45.134 CC lib/env_dpdk/pci_virtio.o 00:03:45.134 CC lib/env_dpdk/pci_vmd.o 00:03:45.134 CC lib/env_dpdk/pci_idxd.o 00:03:45.392 CC lib/env_dpdk/pci_event.o 00:03:45.392 CC lib/env_dpdk/sigbus_handler.o 00:03:45.392 LIB libspdk_json.a 00:03:45.392 CC lib/env_dpdk/pci_dpdk.o 00:03:45.392 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.392 LIB libspdk_idxd.a 00:03:45.392 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.392 CC lib/vmd/led.o 00:03:45.392 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.392 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.650 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.650 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.650 LIB libspdk_vmd.a 00:03:45.908 LIB libspdk_jsonrpc.a 00:03:46.166 CC lib/rpc/rpc.o 00:03:46.166 LIB libspdk_rpc.a 00:03:46.423 LIB libspdk_env_dpdk.a 00:03:46.423 CC lib/keyring/keyring.o 00:03:46.423 CC lib/keyring/keyring_rpc.o 00:03:46.423 CC lib/notify/notify_rpc.o 00:03:46.423 CC lib/notify/notify.o 00:03:46.423 CC lib/trace/trace.o 00:03:46.423 CC lib/trace/trace_flags.o 00:03:46.423 CC lib/trace/trace_rpc.o 00:03:46.681 LIB libspdk_notify.a 00:03:46.681 LIB libspdk_keyring.a 00:03:46.681 LIB libspdk_trace.a 00:03:46.939 CC lib/sock/sock.o 00:03:46.939 CC lib/sock/sock_rpc.o 00:03:46.939 CC lib/thread/thread.o 00:03:46.939 CC lib/thread/iobuf.o 00:03:47.513 LIB libspdk_sock.a 00:03:47.777 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:47.777 CC lib/nvme/nvme_ctrlr.o 00:03:47.777 CC lib/nvme/nvme_fabric.o 00:03:47.777 CC lib/nvme/nvme_ns.o 
00:03:47.777 CC lib/nvme/nvme_ns_cmd.o 00:03:47.777 CC lib/nvme/nvme_pcie_common.o 00:03:47.777 CC lib/nvme/nvme_pcie.o 00:03:47.777 CC lib/nvme/nvme_qpair.o 00:03:47.777 CC lib/nvme/nvme.o 00:03:48.343 CC lib/nvme/nvme_quirks.o 00:03:48.343 CC lib/nvme/nvme_transport.o 00:03:48.601 CC lib/nvme/nvme_discovery.o 00:03:48.601 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:48.601 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:48.601 CC lib/nvme/nvme_tcp.o 00:03:48.601 CC lib/nvme/nvme_opal.o 00:03:48.601 LIB libspdk_thread.a 00:03:48.858 CC lib/nvme/nvme_io_msg.o 00:03:48.858 CC lib/accel/accel.o 00:03:48.858 CC lib/accel/accel_rpc.o 00:03:49.116 CC lib/accel/accel_sw.o 00:03:49.116 CC lib/blob/blobstore.o 00:03:49.116 CC lib/blob/request.o 00:03:49.116 CC lib/blob/zeroes.o 00:03:49.116 CC lib/blob/blob_bs_dev.o 00:03:49.374 CC lib/nvme/nvme_poll_group.o 00:03:49.374 CC lib/nvme/nvme_zns.o 00:03:49.374 CC lib/init/json_config.o 00:03:49.374 CC lib/init/subsystem.o 00:03:49.374 CC lib/init/subsystem_rpc.o 00:03:49.374 CC lib/virtio/virtio.o 00:03:49.632 CC lib/virtio/virtio_vhost_user.o 00:03:49.632 CC lib/init/rpc.o 00:03:49.632 CC lib/nvme/nvme_stubs.o 00:03:49.890 LIB libspdk_init.a 00:03:49.890 CC lib/virtio/virtio_vfio_user.o 00:03:49.890 CC lib/nvme/nvme_auth.o 00:03:49.890 CC lib/nvme/nvme_cuse.o 00:03:49.890 CC lib/virtio/virtio_pci.o 00:03:49.890 CC lib/event/app.o 00:03:50.147 CC lib/event/reactor.o 00:03:50.147 CC lib/nvme/nvme_rdma.o 00:03:50.147 LIB libspdk_accel.a 00:03:50.147 CC lib/event/log_rpc.o 00:03:50.147 CC lib/event/app_rpc.o 00:03:50.147 LIB libspdk_virtio.a 00:03:50.404 CC lib/event/scheduler_static.o 00:03:50.404 CC lib/bdev/bdev.o 00:03:50.404 CC lib/bdev/bdev_rpc.o 00:03:50.404 CC lib/bdev/bdev_zone.o 00:03:50.404 CC lib/bdev/part.o 00:03:50.404 CC lib/bdev/scsi_nvme.o 00:03:50.404 LIB libspdk_event.a 00:03:51.338 LIB libspdk_nvme.a 00:03:52.710 LIB libspdk_blob.a 00:03:52.967 CC lib/blobfs/blobfs.o 00:03:52.967 CC lib/blobfs/tree.o 00:03:52.967 CC lib/lvol/lvol.o 00:03:53.534 LIB libspdk_bdev.a 00:03:53.791 CC lib/nvmf/ctrlr.o 00:03:53.791 CC lib/nvmf/ctrlr_discovery.o 00:03:53.791 CC lib/nvmf/subsystem.o 00:03:53.791 CC lib/nvmf/nvmf.o 00:03:53.791 CC lib/nvmf/ctrlr_bdev.o 00:03:53.791 CC lib/nbd/nbd.o 00:03:53.791 CC lib/scsi/dev.o 00:03:53.791 CC lib/ftl/ftl_core.o 00:03:53.791 LIB libspdk_blobfs.a 00:03:54.047 CC lib/scsi/lun.o 00:03:54.047 CC lib/ftl/ftl_init.o 00:03:54.047 LIB libspdk_lvol.a 00:03:54.047 CC lib/ftl/ftl_layout.o 00:03:54.047 CC lib/nvmf/nvmf_rpc.o 00:03:54.303 CC lib/nbd/nbd_rpc.o 00:03:54.303 CC lib/ftl/ftl_debug.o 00:03:54.303 CC lib/scsi/port.o 00:03:54.303 CC lib/scsi/scsi.o 00:03:54.303 LIB libspdk_nbd.a 00:03:54.560 CC lib/scsi/scsi_bdev.o 00:03:54.560 CC lib/scsi/scsi_pr.o 00:03:54.560 CC lib/scsi/scsi_rpc.o 00:03:54.560 CC lib/scsi/task.o 00:03:54.560 CC lib/ftl/ftl_io.o 00:03:54.560 CC lib/ftl/ftl_sb.o 00:03:54.560 CC lib/ftl/ftl_l2p.o 00:03:54.560 CC lib/ftl/ftl_l2p_flat.o 00:03:54.818 CC lib/nvmf/transport.o 00:03:54.818 CC lib/nvmf/tcp.o 00:03:54.818 CC lib/ftl/ftl_nv_cache.o 00:03:54.818 CC lib/ftl/ftl_band.o 00:03:54.818 CC lib/nvmf/stubs.o 00:03:54.818 CC lib/ftl/ftl_band_ops.o 00:03:55.076 LIB libspdk_scsi.a 00:03:55.076 CC lib/ftl/ftl_writer.o 00:03:55.076 CC lib/nvmf/mdns_server.o 00:03:55.076 CC lib/nvmf/rdma.o 00:03:55.333 CC lib/nvmf/auth.o 00:03:55.333 CC lib/ftl/ftl_rq.o 00:03:55.333 CC lib/ftl/ftl_reloc.o 00:03:55.333 CC lib/iscsi/conn.o 00:03:55.590 CC lib/iscsi/init_grp.o 00:03:55.590 CC lib/iscsi/iscsi.o 00:03:55.590 CC 
lib/iscsi/md5.o 00:03:55.847 CC lib/iscsi/param.o 00:03:55.847 CC lib/iscsi/portal_grp.o 00:03:55.847 CC lib/iscsi/tgt_node.o 00:03:55.847 CC lib/ftl/ftl_l2p_cache.o 00:03:55.847 CC lib/vhost/vhost.o 00:03:56.105 CC lib/vhost/vhost_rpc.o 00:03:56.105 CC lib/iscsi/iscsi_subsystem.o 00:03:56.105 CC lib/ftl/ftl_p2l.o 00:03:56.105 CC lib/ftl/mngt/ftl_mngt.o 00:03:56.363 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:56.363 CC lib/vhost/vhost_scsi.o 00:03:56.621 CC lib/iscsi/iscsi_rpc.o 00:03:56.621 CC lib/iscsi/task.o 00:03:56.621 CC lib/vhost/vhost_blk.o 00:03:56.621 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:56.621 CC lib/vhost/rte_vhost_user.o 00:03:56.621 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:56.621 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:56.621 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:56.621 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:56.879 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:56.879 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:56.879 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:56.879 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:56.879 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:57.137 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:57.137 LIB libspdk_iscsi.a 00:03:57.137 CC lib/ftl/utils/ftl_conf.o 00:03:57.137 CC lib/ftl/utils/ftl_md.o 00:03:57.137 CC lib/ftl/utils/ftl_mempool.o 00:03:57.137 CC lib/ftl/utils/ftl_bitmap.o 00:03:57.395 CC lib/ftl/utils/ftl_property.o 00:03:57.395 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.395 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.395 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.395 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.395 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:57.395 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:57.652 LIB libspdk_vhost.a 00:03:57.652 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:57.652 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:57.652 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:57.652 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:57.652 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:57.652 CC lib/ftl/base/ftl_base_dev.o 00:03:57.652 LIB libspdk_nvmf.a 00:03:57.652 CC lib/ftl/base/ftl_base_bdev.o 00:03:57.652 CC lib/ftl/ftl_trace.o 00:03:57.909 LIB libspdk_ftl.a 00:03:58.473 CC module/env_dpdk/env_dpdk_rpc.o 00:03:58.473 CC module/accel/error/accel_error.o 00:03:58.473 CC module/accel/ioat/accel_ioat.o 00:03:58.473 CC module/accel/dsa/accel_dsa.o 00:03:58.473 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:58.473 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:58.473 CC module/blob/bdev/blob_bdev.o 00:03:58.473 CC module/accel/iaa/accel_iaa.o 00:03:58.473 CC module/keyring/file/keyring.o 00:03:58.473 CC module/sock/posix/posix.o 00:03:58.473 LIB libspdk_env_dpdk_rpc.a 00:03:58.473 CC module/accel/iaa/accel_iaa_rpc.o 00:03:58.730 LIB libspdk_scheduler_dpdk_governor.a 00:03:58.730 CC module/keyring/file/keyring_rpc.o 00:03:58.730 CC module/accel/error/accel_error_rpc.o 00:03:58.730 CC module/accel/ioat/accel_ioat_rpc.o 00:03:58.730 CC module/accel/dsa/accel_dsa_rpc.o 00:03:58.730 LIB libspdk_accel_iaa.a 00:03:58.730 LIB libspdk_scheduler_dynamic.a 00:03:58.730 LIB libspdk_blob_bdev.a 00:03:58.730 LIB libspdk_keyring_file.a 00:03:58.730 LIB libspdk_accel_ioat.a 00:03:58.730 LIB libspdk_accel_error.a 00:03:58.730 CC module/scheduler/gscheduler/gscheduler.o 00:03:58.730 LIB libspdk_accel_dsa.a 00:03:58.987 CC module/keyring/linux/keyring.o 00:03:58.987 CC module/keyring/linux/keyring_rpc.o 00:03:58.987 CC module/blobfs/bdev/blobfs_bdev.o 00:03:58.987 CC module/bdev/lvol/vbdev_lvol.o 00:03:58.987 CC module/bdev/error/vbdev_error.o 00:03:58.987 CC 
module/bdev/delay/vbdev_delay.o 00:03:58.987 LIB libspdk_scheduler_gscheduler.a 00:03:58.987 CC module/bdev/malloc/bdev_malloc.o 00:03:58.987 CC module/bdev/gpt/gpt.o 00:03:58.987 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:58.987 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:58.987 LIB libspdk_keyring_linux.a 00:03:59.301 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.301 CC module/bdev/gpt/vbdev_gpt.o 00:03:59.301 CC module/bdev/error/vbdev_error_rpc.o 00:03:59.301 CC module/bdev/null/bdev_null.o 00:03:59.301 LIB libspdk_blobfs_bdev.a 00:03:59.301 LIB libspdk_sock_posix.a 00:03:59.301 CC module/bdev/nvme/bdev_nvme.o 00:03:59.301 LIB libspdk_bdev_error.a 00:03:59.301 LIB libspdk_bdev_delay.a 00:03:59.301 CC module/bdev/passthru/vbdev_passthru.o 00:03:59.301 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:59.301 CC module/bdev/null/bdev_null_rpc.o 00:03:59.301 LIB libspdk_bdev_malloc.a 00:03:59.558 LIB libspdk_bdev_gpt.a 00:03:59.558 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:59.558 CC module/bdev/raid/bdev_raid.o 00:03:59.558 CC module/bdev/split/vbdev_split.o 00:03:59.558 LIB libspdk_bdev_null.a 00:03:59.558 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:59.559 CC module/bdev/aio/bdev_aio.o 00:03:59.559 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:59.559 CC module/bdev/ftl/bdev_ftl.o 00:03:59.816 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:59.816 CC module/bdev/split/vbdev_split_rpc.o 00:03:59.816 CC module/bdev/aio/bdev_aio_rpc.o 00:03:59.816 LIB libspdk_bdev_passthru.a 00:03:59.816 CC module/bdev/nvme/nvme_rpc.o 00:03:59.816 LIB libspdk_bdev_lvol.a 00:04:00.073 LIB libspdk_bdev_zone_block.a 00:04:00.073 LIB libspdk_bdev_split.a 00:04:00.073 CC module/bdev/nvme/bdev_mdns_client.o 00:04:00.073 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:00.073 CC module/bdev/nvme/vbdev_opal.o 00:04:00.073 LIB libspdk_bdev_aio.a 00:04:00.073 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:00.073 CC module/bdev/raid/bdev_raid_rpc.o 00:04:00.073 CC module/bdev/raid/bdev_raid_sb.o 00:04:00.073 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:00.073 CC module/bdev/iscsi/bdev_iscsi.o 00:04:00.330 LIB libspdk_bdev_ftl.a 00:04:00.330 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:00.330 CC module/bdev/raid/raid0.o 00:04:00.330 CC module/bdev/raid/raid1.o 00:04:00.330 CC module/bdev/raid/concat.o 00:04:00.330 CC module/bdev/raid/raid5f.o 00:04:00.330 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:00.330 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:00.330 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:00.587 LIB libspdk_bdev_iscsi.a 00:04:00.845 LIB libspdk_bdev_raid.a 00:04:00.845 LIB libspdk_bdev_virtio.a 00:04:01.781 LIB libspdk_bdev_nvme.a 00:04:02.039 CC module/event/subsystems/scheduler/scheduler.o 00:04:02.039 CC module/event/subsystems/vmd/vmd.o 00:04:02.039 CC module/event/subsystems/iobuf/iobuf.o 00:04:02.039 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:02.039 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:02.039 CC module/event/subsystems/sock/sock.o 00:04:02.039 CC module/event/subsystems/keyring/keyring.o 00:04:02.039 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:02.297 LIB libspdk_event_keyring.a 00:04:02.297 LIB libspdk_event_scheduler.a 00:04:02.297 LIB libspdk_event_sock.a 00:04:02.297 LIB libspdk_event_vhost_blk.a 00:04:02.297 LIB libspdk_event_vmd.a 00:04:02.297 LIB libspdk_event_iobuf.a 00:04:02.555 CC module/event/subsystems/accel/accel.o 00:04:02.813 LIB libspdk_event_accel.a 00:04:03.071 CC module/event/subsystems/bdev/bdev.o 00:04:03.071 LIB libspdk_event_bdev.a 
00:04:03.329 CC module/event/subsystems/nbd/nbd.o 00:04:03.329 CC module/event/subsystems/scsi/scsi.o 00:04:03.329 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:03.329 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:03.588 LIB libspdk_event_nbd.a 00:04:03.588 LIB libspdk_event_scsi.a 00:04:03.588 LIB libspdk_event_nvmf.a 00:04:03.846 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:03.846 CC module/event/subsystems/iscsi/iscsi.o 00:04:03.846 LIB libspdk_event_vhost_scsi.a 00:04:04.103 LIB libspdk_event_iscsi.a 00:04:04.103 TEST_HEADER include/spdk/accel.h 00:04:04.103 TEST_HEADER include/spdk/accel_module.h 00:04:04.103 TEST_HEADER include/spdk/assert.h 00:04:04.103 CXX app/trace/trace.o 00:04:04.103 TEST_HEADER include/spdk/barrier.h 00:04:04.103 TEST_HEADER include/spdk/base64.h 00:04:04.103 CC test/rpc_client/rpc_client_test.o 00:04:04.103 TEST_HEADER include/spdk/bdev.h 00:04:04.103 TEST_HEADER include/spdk/bdev_module.h 00:04:04.103 TEST_HEADER include/spdk/bdev_zone.h 00:04:04.103 TEST_HEADER include/spdk/bit_array.h 00:04:04.103 TEST_HEADER include/spdk/bit_pool.h 00:04:04.103 TEST_HEADER include/spdk/blob.h 00:04:04.103 TEST_HEADER include/spdk/blob_bdev.h 00:04:04.103 TEST_HEADER include/spdk/blobfs.h 00:04:04.103 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:04.103 TEST_HEADER include/spdk/conf.h 00:04:04.103 TEST_HEADER include/spdk/config.h 00:04:04.361 TEST_HEADER include/spdk/cpuset.h 00:04:04.361 TEST_HEADER include/spdk/crc16.h 00:04:04.361 TEST_HEADER include/spdk/crc32.h 00:04:04.361 TEST_HEADER include/spdk/crc64.h 00:04:04.361 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:04.361 TEST_HEADER include/spdk/dif.h 00:04:04.361 TEST_HEADER include/spdk/dma.h 00:04:04.361 TEST_HEADER include/spdk/endian.h 00:04:04.361 TEST_HEADER include/spdk/env.h 00:04:04.361 TEST_HEADER include/spdk/env_dpdk.h 00:04:04.361 TEST_HEADER include/spdk/event.h 00:04:04.361 TEST_HEADER include/spdk/fd.h 00:04:04.361 TEST_HEADER include/spdk/fd_group.h 00:04:04.361 TEST_HEADER include/spdk/file.h 00:04:04.361 TEST_HEADER include/spdk/ftl.h 00:04:04.361 TEST_HEADER include/spdk/gpt_spec.h 00:04:04.361 TEST_HEADER include/spdk/hexlify.h 00:04:04.361 TEST_HEADER include/spdk/histogram_data.h 00:04:04.361 CC test/thread/poller_perf/poller_perf.o 00:04:04.361 TEST_HEADER include/spdk/idxd.h 00:04:04.361 TEST_HEADER include/spdk/idxd_spec.h 00:04:04.361 CC examples/ioat/perf/perf.o 00:04:04.361 TEST_HEADER include/spdk/init.h 00:04:04.361 TEST_HEADER include/spdk/ioat.h 00:04:04.361 TEST_HEADER include/spdk/ioat_spec.h 00:04:04.361 TEST_HEADER include/spdk/iscsi_spec.h 00:04:04.361 TEST_HEADER include/spdk/json.h 00:04:04.361 TEST_HEADER include/spdk/jsonrpc.h 00:04:04.361 TEST_HEADER include/spdk/keyring.h 00:04:04.361 TEST_HEADER include/spdk/keyring_module.h 00:04:04.361 TEST_HEADER include/spdk/likely.h 00:04:04.361 TEST_HEADER include/spdk/log.h 00:04:04.361 TEST_HEADER include/spdk/lvol.h 00:04:04.361 TEST_HEADER include/spdk/memory.h 00:04:04.361 CC examples/util/zipf/zipf.o 00:04:04.361 TEST_HEADER include/spdk/mmio.h 00:04:04.361 TEST_HEADER include/spdk/nbd.h 00:04:04.361 CC test/dma/test_dma/test_dma.o 00:04:04.361 TEST_HEADER include/spdk/notify.h 00:04:04.361 TEST_HEADER include/spdk/nvme.h 00:04:04.361 TEST_HEADER include/spdk/nvme_intel.h 00:04:04.361 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:04.361 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:04.361 CC test/app/bdev_svc/bdev_svc.o 00:04:04.361 TEST_HEADER include/spdk/nvme_spec.h 00:04:04.361 TEST_HEADER 
include/spdk/nvme_zns.h 00:04:04.361 TEST_HEADER include/spdk/nvmf.h 00:04:04.361 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:04.361 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:04.361 TEST_HEADER include/spdk/nvmf_spec.h 00:04:04.361 TEST_HEADER include/spdk/nvmf_transport.h 00:04:04.361 TEST_HEADER include/spdk/opal.h 00:04:04.361 TEST_HEADER include/spdk/opal_spec.h 00:04:04.361 TEST_HEADER include/spdk/pci_ids.h 00:04:04.361 TEST_HEADER include/spdk/pipe.h 00:04:04.361 TEST_HEADER include/spdk/queue.h 00:04:04.361 TEST_HEADER include/spdk/reduce.h 00:04:04.361 CC test/env/mem_callbacks/mem_callbacks.o 00:04:04.361 TEST_HEADER include/spdk/rpc.h 00:04:04.361 TEST_HEADER include/spdk/scheduler.h 00:04:04.361 TEST_HEADER include/spdk/scsi.h 00:04:04.361 TEST_HEADER include/spdk/scsi_spec.h 00:04:04.361 TEST_HEADER include/spdk/sock.h 00:04:04.361 TEST_HEADER include/spdk/stdinc.h 00:04:04.361 TEST_HEADER include/spdk/string.h 00:04:04.361 TEST_HEADER include/spdk/thread.h 00:04:04.361 TEST_HEADER include/spdk/trace.h 00:04:04.361 TEST_HEADER include/spdk/trace_parser.h 00:04:04.361 TEST_HEADER include/spdk/tree.h 00:04:04.361 TEST_HEADER include/spdk/ublk.h 00:04:04.361 TEST_HEADER include/spdk/util.h 00:04:04.361 TEST_HEADER include/spdk/uuid.h 00:04:04.361 TEST_HEADER include/spdk/version.h 00:04:04.361 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:04.361 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:04.361 TEST_HEADER include/spdk/vhost.h 00:04:04.361 TEST_HEADER include/spdk/vmd.h 00:04:04.361 TEST_HEADER include/spdk/xor.h 00:04:04.361 TEST_HEADER include/spdk/zipf.h 00:04:04.361 CXX test/cpp_headers/accel.o 00:04:04.361 LINK poller_perf 00:04:04.361 LINK interrupt_tgt 00:04:04.361 LINK rpc_client_test 00:04:04.619 LINK zipf 00:04:04.619 LINK bdev_svc 00:04:04.619 LINK ioat_perf 00:04:04.619 LINK mem_callbacks 00:04:04.619 CXX test/cpp_headers/accel_module.o 00:04:04.619 LINK spdk_trace 00:04:04.619 LINK test_dma 00:04:04.876 CXX test/cpp_headers/assert.o 00:04:04.876 CXX test/cpp_headers/barrier.o 00:04:04.876 CXX test/cpp_headers/base64.o 00:04:05.134 CC test/env/vtophys/vtophys.o 00:04:05.135 CC app/trace_record/trace_record.o 00:04:05.135 CXX test/cpp_headers/bdev.o 00:04:05.135 LINK vtophys 00:04:05.135 CC examples/ioat/verify/verify.o 00:04:05.135 CC examples/thread/thread/thread_ex.o 00:04:05.392 CC examples/sock/hello_world/hello_sock.o 00:04:05.392 CC test/thread/lock/spdk_lock.o 00:04:05.392 CXX test/cpp_headers/bdev_module.o 00:04:05.392 LINK spdk_trace_record 00:04:05.392 LINK verify 00:04:05.392 LINK thread 00:04:05.650 CXX test/cpp_headers/bdev_zone.o 00:04:05.650 LINK hello_sock 00:04:05.650 CXX test/cpp_headers/bit_array.o 00:04:05.907 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:05.907 CXX test/cpp_headers/bit_pool.o 00:04:05.907 CXX test/cpp_headers/blob.o 00:04:05.907 CC app/nvmf_tgt/nvmf_main.o 00:04:06.165 LINK env_dpdk_post_init 00:04:06.165 CXX test/cpp_headers/blob_bdev.o 00:04:06.165 LINK nvmf_tgt 00:04:06.165 CC app/iscsi_tgt/iscsi_tgt.o 00:04:06.423 CXX test/cpp_headers/blobfs.o 00:04:06.423 CC examples/vmd/lsvmd/lsvmd.o 00:04:06.423 CXX test/cpp_headers/blobfs_bdev.o 00:04:06.423 LINK iscsi_tgt 00:04:06.423 LINK lsvmd 00:04:06.681 CXX test/cpp_headers/conf.o 00:04:06.681 CC app/spdk_tgt/spdk_tgt.o 00:04:06.681 CXX test/cpp_headers/config.o 00:04:06.681 CXX test/cpp_headers/cpuset.o 00:04:06.681 LINK spdk_tgt 00:04:06.939 CXX test/cpp_headers/crc16.o 00:04:07.198 CXX test/cpp_headers/crc32.o 00:04:07.198 LINK spdk_lock 00:04:07.198 CC 
test/env/memory/memory_ut.o 00:04:07.198 CXX test/cpp_headers/crc64.o 00:04:07.456 CXX test/cpp_headers/dif.o 00:04:07.456 CXX test/cpp_headers/dma.o 00:04:07.715 CC examples/vmd/led/led.o 00:04:07.715 CXX test/cpp_headers/endian.o 00:04:07.715 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.715 LINK led 00:04:07.973 CXX test/cpp_headers/env.o 00:04:07.973 CXX test/cpp_headers/env_dpdk.o 00:04:07.973 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:07.973 LINK memory_ut 00:04:07.973 CXX test/cpp_headers/event.o 00:04:08.231 LINK histogram_ut 00:04:08.231 LINK nvme_fuzz 00:04:08.231 CC test/accel/dif/dif.o 00:04:08.231 CXX test/cpp_headers/fd.o 00:04:08.489 CC test/env/pci/pci_ut.o 00:04:08.489 CXX test/cpp_headers/fd_group.o 00:04:08.489 CXX test/cpp_headers/file.o 00:04:08.747 CC test/unit/lib/log/log.c/log_ut.o 00:04:08.747 CC test/blobfs/mkfs/mkfs.o 00:04:08.747 CXX test/cpp_headers/ftl.o 00:04:08.747 LINK dif 00:04:08.747 LINK pci_ut 00:04:09.006 LINK mkfs 00:04:09.006 LINK log_ut 00:04:09.006 CXX test/cpp_headers/gpt_spec.o 00:04:09.264 CC examples/idxd/perf/perf.o 00:04:09.264 CXX test/cpp_headers/hexlify.o 00:04:09.264 CXX test/cpp_headers/histogram_data.o 00:04:09.264 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:09.264 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:09.531 CXX test/cpp_headers/idxd.o 00:04:09.532 LINK idxd_perf 00:04:09.532 CC test/event/event_perf/event_perf.o 00:04:09.532 CXX test/cpp_headers/idxd_spec.o 00:04:09.804 CC test/event/reactor/reactor.o 00:04:09.804 LINK event_perf 00:04:09.804 CXX test/cpp_headers/init.o 00:04:09.804 LINK reactor 00:04:10.063 CXX test/cpp_headers/ioat.o 00:04:10.063 LINK common_ut 00:04:10.063 CC test/lvol/esnap/esnap.o 00:04:10.063 CXX test/cpp_headers/ioat_spec.o 00:04:10.321 CC examples/accel/perf/accel_perf.o 00:04:10.321 CXX test/cpp_headers/iscsi_spec.o 00:04:10.321 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:10.580 CXX test/cpp_headers/json.o 00:04:10.580 CC test/nvme/aer/aer.o 00:04:10.580 CC app/spdk_lspci/spdk_lspci.o 00:04:10.580 CC test/event/reactor_perf/reactor_perf.o 00:04:10.580 LINK base64_ut 00:04:10.580 CXX test/cpp_headers/jsonrpc.o 00:04:10.839 LINK spdk_lspci 00:04:10.839 LINK reactor_perf 00:04:10.839 LINK accel_perf 00:04:10.839 CXX test/cpp_headers/keyring.o 00:04:10.839 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:10.839 LINK aer 00:04:11.097 CXX test/cpp_headers/keyring_module.o 00:04:11.097 CXX test/cpp_headers/likely.o 00:04:11.355 LINK iscsi_fuzz 00:04:11.355 CXX test/cpp_headers/log.o 00:04:11.614 LINK bit_array_ut 00:04:11.614 CXX test/cpp_headers/lvol.o 00:04:11.614 CC test/event/app_repeat/app_repeat.o 00:04:11.873 CXX test/cpp_headers/memory.o 00:04:11.873 LINK app_repeat 00:04:11.873 CXX test/cpp_headers/mmio.o 00:04:11.873 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:11.873 CC app/spdk_nvme_perf/perf.o 00:04:11.873 CXX test/cpp_headers/nbd.o 00:04:11.873 CC examples/blob/hello_world/hello_blob.o 00:04:11.873 CC app/spdk_nvme_identify/identify.o 00:04:11.873 CXX test/cpp_headers/notify.o 00:04:12.131 LINK cpuset_ut 00:04:12.131 CC test/bdev/bdevio/bdevio.o 00:04:12.131 CC test/nvme/reset/reset.o 00:04:12.131 CXX test/cpp_headers/nvme.o 00:04:12.131 LINK hello_blob 00:04:12.389 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:12.389 CXX test/cpp_headers/nvme_intel.o 00:04:12.389 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.389 LINK reset 00:04:12.389 LINK crc16_ut 00:04:12.389 CXX test/cpp_headers/nvme_ocssd.o 00:04:12.389 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:12.648 LINK bdevio 00:04:12.648 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:12.648 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:12.906 LINK spdk_nvme_perf 00:04:12.906 LINK crc32_ieee_ut 00:04:12.906 CC test/event/scheduler/scheduler.o 00:04:12.906 LINK spdk_nvme_identify 00:04:12.906 CXX test/cpp_headers/nvme_spec.o 00:04:12.906 LINK vhost_fuzz 00:04:13.164 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:13.164 CXX test/cpp_headers/nvme_zns.o 00:04:13.164 LINK scheduler 00:04:13.164 LINK crc32c_ut 00:04:13.423 CXX test/cpp_headers/nvmf.o 00:04:13.423 CC test/nvme/sgl/sgl.o 00:04:13.423 CXX test/cpp_headers/nvmf_cmd.o 00:04:13.682 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:13.682 LINK crc64_ut 00:04:13.682 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:13.946 LINK sgl 00:04:13.946 CXX test/cpp_headers/nvmf_spec.o 00:04:13.946 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:13.946 CC test/app/histogram_perf/histogram_perf.o 00:04:14.206 CXX test/cpp_headers/nvmf_transport.o 00:04:14.206 LINK histogram_perf 00:04:14.206 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.206 CXX test/cpp_headers/opal.o 00:04:14.464 CC examples/blob/cli/blobcli.o 00:04:14.464 LINK spdk_nvme_discover 00:04:14.464 CXX test/cpp_headers/opal_spec.o 00:04:14.722 CXX test/cpp_headers/pci_ids.o 00:04:14.979 CXX test/cpp_headers/pipe.o 00:04:14.979 LINK blobcli 00:04:14.979 CXX test/cpp_headers/queue.o 00:04:14.979 CC test/app/jsoncat/jsoncat.o 00:04:14.979 CXX test/cpp_headers/reduce.o 00:04:15.237 CC test/nvme/e2edp/nvme_dp.o 00:04:15.237 LINK dif_ut 00:04:15.237 LINK jsoncat 00:04:15.237 CXX test/cpp_headers/rpc.o 00:04:15.237 CXX test/cpp_headers/scheduler.o 00:04:15.495 LINK nvme_dp 00:04:15.495 CXX test/cpp_headers/scsi.o 00:04:15.495 CC test/app/stub/stub.o 00:04:15.495 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:15.753 CXX test/cpp_headers/scsi_spec.o 00:04:15.753 CC app/spdk_top/spdk_top.o 00:04:15.753 LINK stub 00:04:15.753 CC test/unit/lib/util/math.c/math_ut.o 00:04:16.011 LINK iov_ut 00:04:16.011 CXX test/cpp_headers/sock.o 00:04:16.011 CXX test/cpp_headers/stdinc.o 00:04:16.011 LINK math_ut 00:04:16.011 CXX test/cpp_headers/string.o 00:04:16.011 CXX test/cpp_headers/thread.o 00:04:16.269 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:16.269 CC test/unit/lib/util/string.c/string_ut.o 00:04:16.269 CXX test/cpp_headers/trace.o 00:04:16.269 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:16.269 CXX test/cpp_headers/trace_parser.o 00:04:16.526 LINK esnap 00:04:16.526 CC app/vhost/vhost.o 00:04:16.526 CXX test/cpp_headers/tree.o 00:04:16.526 LINK string_ut 00:04:16.526 CXX test/cpp_headers/ublk.o 00:04:16.840 CC test/nvme/overhead/overhead.o 00:04:16.840 LINK vhost 00:04:16.841 CXX test/cpp_headers/util.o 00:04:16.841 CXX test/cpp_headers/uuid.o 00:04:16.841 LINK spdk_top 00:04:16.841 LINK xor_ut 00:04:16.841 LINK pipe_ut 00:04:17.097 CC app/spdk_dd/spdk_dd.o 00:04:17.097 CXX test/cpp_headers/version.o 00:04:17.097 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.097 CC app/fio/nvme/fio_plugin.o 00:04:17.354 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.354 LINK overhead 00:04:17.354 CC app/fio/bdev/fio_plugin.o 00:04:17.354 CXX test/cpp_headers/vhost.o 00:04:17.354 CXX test/cpp_headers/vmd.o 00:04:17.354 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:17.612 CXX test/cpp_headers/xor.o 00:04:17.612 CXX test/cpp_headers/zipf.o 00:04:17.612 LINK spdk_dd 00:04:17.871 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:17.871 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 
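Each *_ut.o compiled in the stretch above is linked into its own small test executable (see the LINK crc64_ut style entries), so individual suites can be run in isolation. A sketch of invoking one of them directly after the build, assuming the conventional in-tree output location next to the test source (the exact path is an assumption inferred from the CC line above):

    $ cd /home/vagrant/spdk_repo/spdk
    # Unit test binaries land in a directory named after the source file;
    # running one executes just that CUnit suite and prints a summary.
    $ ./test/unit/lib/util/crc64.c/crc64_ut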
00:04:17.871 LINK spdk_bdev 00:04:17.871 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:17.871 LINK spdk_nvme 00:04:18.130 CC examples/nvme/hello_world/hello_world.o 00:04:18.388 LINK dma_ut 00:04:18.388 LINK pci_event_ut 00:04:18.388 LINK hello_world 00:04:18.388 LINK ioat_ut 00:04:18.645 CC test/nvme/err_injection/err_injection.o 00:04:18.645 LINK idxd_user_ut 00:04:18.645 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:18.645 CC examples/nvme/reconnect/reconnect.o 00:04:18.903 LINK err_injection 00:04:18.903 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:18.903 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:19.161 LINK reconnect 00:04:19.161 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.724 LINK json_util_ut 00:04:19.724 CC test/nvme/startup/startup.o 00:04:19.724 LINK nvme_manage 00:04:19.724 LINK idxd_ut 00:04:19.980 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:19.980 LINK startup 00:04:19.980 CC examples/nvme/arbitration/arbitration.o 00:04:20.237 CC examples/nvme/hotplug/hotplug.o 00:04:20.237 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.237 CC examples/nvme/abort/abort.o 00:04:20.237 LINK cmb_copy 00:04:20.493 LINK arbitration 00:04:20.493 LINK hotplug 00:04:20.750 LINK json_write_ut 00:04:20.750 LINK abort 00:04:21.008 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.008 CC test/nvme/reserve/reserve.o 00:04:21.008 CC test/nvme/simple_copy/simple_copy.o 00:04:21.008 LINK pmr_persistence 00:04:21.266 CC test/nvme/connect_stress/connect_stress.o 00:04:21.266 LINK reserve 00:04:21.266 CC test/nvme/boot_partition/boot_partition.o 00:04:21.266 LINK simple_copy 00:04:21.266 LINK connect_stress 00:04:21.524 LINK boot_partition 00:04:21.524 CC test/nvme/compliance/nvme_compliance.o 00:04:21.524 LINK json_parse_ut 00:04:21.524 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:21.524 CC test/nvme/fused_ordering/fused_ordering.o 00:04:21.782 LINK doorbell_aers 00:04:21.782 LINK fused_ordering 00:04:21.782 LINK nvme_compliance 00:04:22.040 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:22.040 CC test/nvme/fdp/fdp.o 00:04:22.299 CC test/nvme/cuse/cuse.o 00:04:22.557 LINK fdp 00:04:22.557 LINK jsonrpc_server_ut 00:04:22.557 CC examples/bdev/hello_world/hello_bdev.o 00:04:22.557 CC examples/bdev/bdevperf/bdevperf.o 00:04:22.815 LINK hello_bdev 00:04:22.815 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:23.748 LINK bdevperf 00:04:23.749 LINK cuse 00:04:23.749 LINK rpc_ut 00:04:24.321 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:24.321 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:24.321 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:24.321 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:24.321 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:24.321 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:04:24.886 LINK keyring_ut 00:04:25.143 LINK notify_ut 00:04:25.400 LINK iobuf_ut 00:04:25.657 LINK posix_ut 00:04:26.224 LINK sock_ut 00:04:26.791 LINK thread_ut 00:04:26.791 CC examples/nvmf/nvmf/nvmf.o 00:04:26.791 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:26.791 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:26.791 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:26.791 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:26.791 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:26.791 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:26.791 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:26.791 CC 
test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:27.050 LINK nvmf 00:04:27.050 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:27.984 LINK nvme_ns_ut 00:04:27.985 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:28.243 LINK nvme_ctrlr_cmd_ut 00:04:28.243 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:28.243 LINK nvme_poll_group_ut 00:04:28.501 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:28.501 LINK nvme_ut 00:04:28.501 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:28.759 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:28.759 LINK nvme_ns_ocssd_cmd_ut 00:04:28.759 LINK nvme_quirks_ut 00:04:28.759 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:29.017 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:29.017 LINK nvme_ns_cmd_ut 00:04:29.017 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:29.275 LINK nvme_pcie_ut 00:04:29.275 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:29.533 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:29.791 LINK nvme_qpair_ut 00:04:29.791 LINK nvme_transport_ut 00:04:30.048 LINK nvme_io_msg_ut 00:04:30.048 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:30.049 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:30.306 LINK nvme_fabric_ut 00:04:30.306 LINK nvme_opal_ut 00:04:30.564 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:30.564 LINK nvme_pcie_common_ut 00:04:30.564 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:30.564 LINK nvme_ctrlr_ut 00:04:30.823 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:04:30.823 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:31.391 LINK blob_bdev_ut 00:04:31.391 LINK rpc_ut 00:04:31.649 LINK subsystem_ut 00:04:31.924 LINK nvme_tcp_ut 00:04:32.191 LINK nvme_cuse_ut 00:04:32.191 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:32.191 CC test/unit/lib/event/app.c/app_ut.o 00:04:32.450 LINK nvme_rdma_ut 00:04:33.016 LINK app_ut 00:04:33.274 LINK accel_ut 00:04:33.274 LINK reactor_ut 00:04:33.841 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:33.841 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:33.841 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:33.841 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:33.841 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:33.841 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:33.841 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:33.841 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:33.841 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:34.099 LINK scsi_nvme_ut 00:04:34.356 LINK bdev_zone_ut 00:04:34.614 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:34.614 LINK gpt_ut 00:04:34.615 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:34.873 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:34.873 LINK vbdev_zone_block_ut 00:04:35.439 LINK bdev_raid_sb_ut 00:04:35.439 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:35.439 LINK vbdev_lvol_ut 00:04:35.697 LINK concat_ut 00:04:35.697 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:04:35.955 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:36.212 LINK raid1_ut 00:04:36.212 LINK bdev_raid_ut 00:04:36.777 LINK raid0_ut 00:04:37.344 LINK raid5f_ut 00:04:38.274 LINK part_ut 00:04:38.532 LINK bdev_ut 00:04:39.106 LINK blob_ut 00:04:39.673 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:39.673 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:39.673 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:39.673 CC 
test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:39.673 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:39.930 LINK blobfs_bdev_ut 00:04:39.930 LINK tree_ut 00:04:40.188 LINK bdev_nvme_ut 00:04:40.477 LINK bdev_ut 00:04:41.043 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:41.043 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:41.043 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:41.043 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:41.043 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:41.043 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:41.043 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:41.300 LINK blobfs_async_ut 00:04:41.300 LINK blobfs_sync_ut 00:04:41.558 LINK scsi_ut 00:04:41.558 LINK ftl_l2p_ut 00:04:41.558 LINK dev_ut 00:04:41.558 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:41.816 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:41.816 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:41.816 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:41.816 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:42.074 LINK lvol_ut 00:04:42.074 LINK lun_ut 00:04:42.333 LINK scsi_pr_ut 00:04:42.333 LINK scsi_bdev_ut 00:04:42.333 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:42.333 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:04:42.590 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:42.590 LINK ftl_bitmap_ut 00:04:42.590 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:42.847 LINK ftl_io_ut 00:04:42.847 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:43.105 LINK ftl_mempool_ut 00:04:43.105 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:43.669 LINK ftl_mngt_ut 00:04:43.669 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:43.669 LINK ftl_band_ut 00:04:43.669 LINK ftl_p2l_ut 00:04:43.927 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:43.927 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:44.185 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:44.185 LINK ctrlr_discovery_ut 00:04:44.444 LINK subsystem_ut 00:04:44.444 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:04:44.444 LINK ftl_sb_ut 00:04:44.702 LINK ctrlr_ut 00:04:44.702 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:44.960 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:44.960 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:44.960 LINK ftl_layout_upgrade_ut 00:04:45.218 LINK ctrlr_bdev_ut 00:04:45.218 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:45.476 LINK conn_ut 00:04:45.476 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:45.476 LINK init_grp_ut 00:04:45.735 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:45.735 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:45.735 LINK nvmf_ut 00:04:45.735 LINK tcp_ut 00:04:45.994 LINK param_ut 00:04:46.561 LINK auth_ut 00:04:46.561 LINK vhost_ut 00:04:47.129 LINK portal_grp_ut 00:04:47.129 LINK tgt_node_ut 00:04:48.117 LINK iscsi_ut 00:04:48.701 LINK transport_ut 00:04:48.960 LINK rdma_ut 00:04:49.218 00:04:49.218 real 1m50.359s 00:04:49.218 user 9m27.970s 00:04:49.218 sys 1m41.785s 00:04:49.218 22:50:38 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:49.218 22:50:38 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:04:49.218 ************************************ 00:04:49.218 END TEST unittest_build 00:04:49.218 ************************************ 00:04:49.218 22:50:38 -- common/autotest_common.sh@1142 -- $ return 0 00:04:49.218 22:50:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:49.218 22:50:38 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:04:49.218 22:50:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:49.218 22:50:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.218 22:50:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:49.218 22:50:38 -- pm/common@44 -- $ pid=2316 00:04:49.218 22:50:38 -- pm/common@50 -- $ kill -TERM 2316 00:04:49.218 22:50:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.218 22:50:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:49.218 22:50:38 -- pm/common@44 -- $ pid=2317 00:04:49.218 22:50:38 -- pm/common@50 -- $ kill -TERM 2317 00:04:49.218 22:50:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.218 22:50:38 -- nvmf/common.sh@7 -- # uname -s 00:04:49.218 22:50:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.218 22:50:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.218 22:50:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.218 22:50:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.218 22:50:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.219 22:50:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.219 22:50:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.219 22:50:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.219 22:50:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.219 22:50:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.219 22:50:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4ba10dab-3c77-48d9-be14-c7b864f129c5 00:04:49.219 22:50:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=4ba10dab-3c77-48d9-be14-c7b864f129c5 00:04:49.219 22:50:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.219 22:50:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.219 22:50:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.219 22:50:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.219 22:50:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.219 22:50:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.219 22:50:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.219 22:50:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.219 22:50:38 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.219 22:50:38 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.219 22:50:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.219 22:50:38 -- paths/export.sh@5 -- # export PATH 00:04:49.219 22:50:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.219 22:50:38 -- nvmf/common.sh@47 -- # : 0 00:04:49.219 22:50:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:49.219 22:50:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:49.219 22:50:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.219 22:50:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.219 22:50:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.219 22:50:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:49.219 22:50:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:49.219 22:50:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:49.219 22:50:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:49.219 22:50:38 -- spdk/autotest.sh@32 -- # uname -s 00:04:49.219 22:50:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:49.219 22:50:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:49.219 22:50:38 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.219 22:50:38 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:49.219 22:50:38 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.219 22:50:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:49.219 22:50:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:49.219 22:50:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:49.219 22:50:38 -- spdk/autotest.sh@48 -- # udevadm_pid=111486 00:04:49.219 22:50:38 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:49.219 22:50:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:49.219 22:50:38 -- pm/common@17 -- # local monitor 00:04:49.219 22:50:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.219 22:50:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.219 22:50:38 -- pm/common@25 -- # sleep 1 00:04:49.219 22:50:38 -- pm/common@21 -- # date +%s 00:04:49.219 22:50:38 -- pm/common@21 -- # date +%s 00:04:49.219 22:50:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720911038 00:04:49.219 22:50:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720911038 00:04:49.508 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720911038_collect-vmstat.pm.log 00:04:49.508 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720911038_collect-cpu-load.pm.log 00:04:50.443 22:50:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:50.443 22:50:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:50.443 22:50:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.443 22:50:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.443 22:50:39 -- spdk/autotest.sh@59 -- # create_test_list 00:04:50.443 22:50:39 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:50.443 22:50:39 -- common/autotest_common.sh@10 -- # set +x 00:04:50.443 22:50:39 -- 
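The prologue above swaps the kernel core_pattern for a pipe into the output tree's core collector and starts epoch-stamped CPU-load and vmstat monitors with pid files for later kill -TERM. A minimal sketch of that pattern, assuming root and illustrative paths (core-collector.sh here stands in for SPDK's actual collector script):

    #!/usr/bin/env bash
    # Sketch only: pipe core dumps to a collector and start a timestamped
    # vmstat monitor, as the autotest prologue does. Paths are placeholders.
    set -euo pipefail
    OUT=/tmp/autotest-output
    mkdir -p "$OUT/coredumps" "$OUT/power"

    # Remember the old pattern and restore it when this shell exits.
    old_pattern=$(</proc/sys/kernel/core_pattern)
    trap 'echo "$old_pattern" >/proc/sys/kernel/core_pattern' EXIT

    # %P pid, %s signal, %t dump time: the same specifiers echoed in the log.
    echo "|$PWD/core-collector.sh %P %s %t" >/proc/sys/kernel/core_pattern

    ts=$(date +%s)                                  # epoch suffix, as in the log
    vmstat -n 1 >"$OUT/power/monitor.$ts.vmstat.log" &
    echo $! >"$OUT/power/collect-vmstat.pid"        # pid file for later kill -TERM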
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:50.443 22:50:39 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:50.443 22:50:39 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:50.443 22:50:39 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:50.443 22:50:39 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:50.443 22:50:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:50.443 22:50:39 -- common/autotest_common.sh@1455 -- # uname 00:04:50.443 22:50:39 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:50.443 22:50:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:50.443 22:50:39 -- common/autotest_common.sh@1475 -- # uname 00:04:50.443 22:50:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:50.443 22:50:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:50.443 22:50:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:50.443 22:50:39 -- spdk/autotest.sh@72 -- # hash lcov 00:04:50.443 22:50:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:50.443 22:50:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:50.443 --rc lcov_branch_coverage=1 00:04:50.443 --rc lcov_function_coverage=1 00:04:50.443 --rc genhtml_branch_coverage=1 00:04:50.443 --rc genhtml_function_coverage=1 00:04:50.443 --rc genhtml_legend=1 00:04:50.443 --rc geninfo_all_blocks=1 00:04:50.443 ' 00:04:50.443 22:50:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:50.443 --rc lcov_branch_coverage=1 00:04:50.443 --rc lcov_function_coverage=1 00:04:50.443 --rc genhtml_branch_coverage=1 00:04:50.443 --rc genhtml_function_coverage=1 00:04:50.443 --rc genhtml_legend=1 00:04:50.443 --rc geninfo_all_blocks=1 00:04:50.443 ' 00:04:50.443 22:50:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:50.443 --rc lcov_branch_coverage=1 00:04:50.443 --rc lcov_function_coverage=1 00:04:50.443 --rc genhtml_branch_coverage=1 00:04:50.443 --rc genhtml_function_coverage=1 00:04:50.444 --rc genhtml_legend=1 00:04:50.444 --rc geninfo_all_blocks=1 00:04:50.444 --no-external' 00:04:50.444 22:50:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:50.444 --rc lcov_branch_coverage=1 00:04:50.444 --rc lcov_function_coverage=1 00:04:50.444 --rc genhtml_branch_coverage=1 00:04:50.444 --rc genhtml_function_coverage=1 00:04:50.444 --rc genhtml_legend=1 00:04:50.444 --rc geninfo_all_blocks=1 00:04:50.444 --no-external' 00:04:50.444 22:50:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:50.444 lcov: LCOV version 1.15 00:04:50.444 22:50:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:57.005 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:57.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:35.759 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:35.759 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 
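Coverage here is two-phase: LCOV_OPTS enables branch and function coverage plus --no-external, and an initial (-i) capture records a zero-count Baseline tracefile so the post-test capture can be merged against it; the geninfo "no functions found" warnings that follow are expected for header-only .gcno files with no executable code. A hedged sketch of the same flow (directory names are placeholders modeled on this run):

    # Sketch: baseline-then-merge lcov flow using the flags from this run.
    SRC=/home/vagrant/spdk_repo/spdk       # source tree, as in the log
    OUT=$SRC/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"

    lcov $LCOV_OPTS -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
    # ... run the test suite here; it drops .gcda files next to the .gcno files ...
    lcov $LCOV_OPTS -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"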
00:05:35.759 [dozens of identical geninfo warnings omitted: every remaining test/cpp_headers/*.gcno baseline file produced the same "no functions found" / "GCOV did not produce any data" pair]
00:05:35.760 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:35.760 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:35.760 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:35.760 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:35.760 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:35.760 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:35.760 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:35.760 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:35.760 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:38.290 22:51:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:38.290 22:51:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.290 22:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:38.290 22:51:27 -- spdk/autotest.sh@91 -- # rm -f 00:05:38.290 22:51:27 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:38.549 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:38.549 22:51:27 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:38.549 22:51:27 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:38.549 22:51:27 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:38.549 22:51:27 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:38.549 22:51:27 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:38.549 22:51:27 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:38.549 22:51:27 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:38.549 22:51:27 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.549 22:51:27 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:38.549 22:51:27 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:38.549 22:51:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.549 22:51:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:38.549 22:51:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:38.549 22:51:27 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:38.549 22:51:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.549 No valid GPT data, bailing 00:05:38.549 22:51:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.549 22:51:27 -- scripts/common.sh@391 -- # pt= 00:05:38.549 22:51:27 -- scripts/common.sh@392 -- # return 1 00:05:38.549 22:51:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.549 1+0 records in 00:05:38.549 1+0 records out 00:05:38.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506725 s, 207 MB/s 00:05:38.549 22:51:27 -- spdk/autotest.sh@118 -- # sync 00:05:38.549 22:51:27 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:38.549 22:51:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:38.549 22:51:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:39.925 
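The pre_cleanup step above probes the NVMe namespace for a partition table (spdk-gpt.py, then blkid -s PTTYPE) and, finding none ("No valid GPT data, bailing"), scrubs the first MiB before syncing. An equivalent hedged sketch using only blkid and dd (destructive; the device path is a placeholder taken from this run):

    # Sketch: wipe the first MiB of a namespace that has no partition table.
    dev=/dev/nvme0n1                          # placeholder; data on it is lost
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    fi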
22:51:29 -- spdk/autotest.sh@124 -- # uname -s 00:05:39.925 22:51:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:39.925 22:51:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:39.925 22:51:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.926 22:51:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.926 22:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.926 ************************************ 00:05:39.926 START TEST setup.sh 00:05:39.926 ************************************ 00:05:39.926 22:51:29 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:39.926 * Looking for test storage... 00:05:39.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.926 22:51:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:39.926 22:51:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:39.926 22:51:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:39.926 22:51:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.926 22:51:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.926 22:51:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:39.926 ************************************ 00:05:39.926 START TEST acl 00:05:39.926 ************************************ 00:05:39.926 22:51:29 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:40.184 * Looking for test storage... 00:05:40.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:40.184 22:51:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.185 22:51:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.185 22:51:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:40.185 22:51:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:40.185 22:51:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:40.185 22:51:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:40.185 22:51:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:40.185 22:51:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.185 22:51:29 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.443 22:51:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:40.443 22:51:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:40.444 22:51:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:40.444 22:51:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:40.444 22:51:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.444 
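Each suite above is launched through run_test, which checks that it received a name plus a command (the '[' 2 -le 1 ']' guard), prints the starred START/END TEST banners, and times the run. A rough sketch of such a wrapper, not SPDK's actual implementation:

    # Sketch of a run_test-style harness: banner, run, time, banner.
    run_test() {
        local name=$1; shift
        [ "$#" -ge 1 ] || return 1            # need a command after the name
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test "setup.sh" /path/to/test-setup.sh    # illustrative usage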
22:51:29 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.011 Hugepages 00:05:41.011 node hugesize free / total 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.011 00:05:41.011 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:41.011 22:51:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:41.011 22:51:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.011 22:51:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.011 22:51:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:41.270 ************************************ 00:05:41.270 START TEST denied 00:05:41.270 ************************************ 00:05:41.270 22:51:30 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:41.270 22:51:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:41.270 22:51:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:41.270 22:51:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:41.270 22:51:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.270 22:51:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.646 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:42.646 22:51:31 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.646 22:51:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.904 00:05:42.904 real 0m1.824s 00:05:42.904 user 0m0.481s 00:05:42.904 sys 0m1.390s 00:05:42.904 22:51:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.904 22:51:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:42.904 ************************************ 00:05:42.904 END TEST denied 00:05:42.904 ************************************ 00:05:42.904 22:51:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:42.904 22:51:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:42.904 22:51:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.904 22:51:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.904 22:51:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:42.904 ************************************ 00:05:42.904 START TEST allowed 00:05:42.904 ************************************ 00:05:42.904 22:51:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:42.905 22:51:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:42.905 22:51:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:42.905 22:51:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.905 22:51:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:42.905 22:51:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.806 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.806 22:51:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:44.806 22:51:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:44.806 22:51:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:44.806 22:51:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.806 22:51:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.068 00:05:45.068 real 0m1.954s 00:05:45.068 user 0m0.446s 00:05:45.068 sys 0m1.525s 00:05:45.068 22:51:34 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.068 ************************************ 00:05:45.068 END TEST allowed 00:05:45.068 ************************************ 00:05:45.069 22:51:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:45.069 22:51:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:45.069 ************************************ 00:05:45.069 END TEST acl 00:05:45.069 00:05:45.069 real 0m4.960s 00:05:45.069 user 0m1.571s 00:05:45.069 sys 0m3.511s 00:05:45.069 22:51:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.069 22:51:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:45.069 ************************************ 00:05:45.069 22:51:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:45.069 22:51:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages 
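The two acl subtests drive scripts/setup.sh through its PCI filters: with PCI_BLOCKED listing the controller it logs "Skipping denied controller at 0000:00:10.0", and with PCI_ALLOWED set only that BDF is rebound (nvme -> uio_pci_generic). A hedged usage example with the BDF from this run:

    # Sketch: gate device binding with the block/allow lists, as the acl tests do.
    cd /home/vagrant/spdk_repo/spdk
    PCI_BLOCKED="0000:00:10.0" ./scripts/setup.sh config   # controller skipped
    ./scripts/setup.sh reset
    PCI_ALLOWED="0000:00:10.0" ./scripts/setup.sh config   # only this BDF bound
    ./scripts/setup.sh reset                               # hand it back to nvme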
/home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:45.069 22:51:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.069 22:51:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.069 22:51:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:45.069 ************************************ 00:05:45.069 START TEST hugepages 00:05:45.069 ************************************ 00:05:45.069 22:51:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:45.069 * Looking for test storage... 00:05:45.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 1814328 kB' 'MemAvailable: 7393252 kB' 'Buffers: 40420 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403672 kB' 'Inactive: 4383924 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 129696 kB' 'Active(file): 1402620 kB' 'Inactive(file): 4254228 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 444 kB' 'Writeback: 0 kB' 'AnonPages: 148092 kB' 'Mapped: 68004 kB' 'Shmem: 2600 kB' 'KReclaimable: 243380 kB' 'Slab: 312580 kB' 'SReclaimable: 243380 kB' 'SUnreclaim: 69200 kB' 'KernelStack: 4616 kB' 'PageTables: 3880 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 504836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:45.069 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:05:45.069 [xtrace omitted: get_meminfo's read loop repeats the same "[[ $var == Hugepagesize ]]" test and "continue" for every remaining /proc/meminfo field until it reaches Hugepagesize below]
00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- 
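All of the traced comparisons above are get_meminfo scanning /proc/meminfo one "field: value" pair at a time until it hits Hugepagesize, then echoing the value (2048 kB on this host). A condensed sketch of that lookup:

    # Sketch: the Hugepagesize lookup the xtrace above walks through.
    get_hugepagesize_kb() {
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == Hugepagesize ]]; then
                echo "$val"                   # e.g. 2048
                return 0
            fi
        done </proc/meminfo
        return 1
    }
    default_hugepages=$(get_hugepagesize_kb)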
# clear_hp 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:45.070 22:51:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:45.070 22:51:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.070 22:51:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.070 22:51:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:45.338 ************************************ 00:05:45.338 START TEST default_setup 00:05:45.338 ************************************ 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:45.338 
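The trace above is the whole setup trick: clear_hp releases every pre-allocated hugepage pool, then get_test_nr_hugepages divides the requested size by the default hugepage size (2097152 kB / 2048 kB) to arrive at nr_hugepages=1024. A minimal standalone sketch of that logic, assuming the standard sysfs hugepage knobs and root privileges; the helper below is illustrative, not the exact setup/hugepages.sh source:

    #!/usr/bin/env bash
    # Sketch: release every pre-allocated hugepage pool on every NUMA node.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            [[ -d $node ]] || continue          # no NUMA sysfs: nothing to clear
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"     # requires root
            done
        done
    }

    # 2 GiB requested (2097152 kB) at the default 2048 kB hugepage size
    # -> 1024 pages, matching nr_hugepages=1024 in the trace above.
    size_kb=2097152
    default_hugepages_kb=2048
    echo "nr_hugepages=$(( size_kb / default_hugepages_kb ))"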
00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:45.338 22:51:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:45.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:45.595 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:46.160 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3897924 kB' 'MemAvailable: 9476720 kB' 'Buffers: 40420 kB' 'Cached: 5628088 kB' 'SwapCached: 0 kB' 'Active: 1403728 kB' 'Inactive: 4399972 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145816 kB' 'Active(file): 1402688 kB' 'Inactive(file): 4254156 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 164480 kB' 'Mapped: 67984 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312368 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69112 kB' 'KernelStack: 4384 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:46.161 [identical iterations elided: each /proc/meminfo key from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped via continue]
00:05:46.161 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.161 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:46.161 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:46.161 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
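Every get_meminfo call in this trace is the same pattern: slurp the meminfo file, walk it with an IFS=': ' read loop, and echo the value of the first key that matches the requested field. A minimal re-creation of that pattern, simplified to the global /proc/meminfo case (the real setup/common.sh additionally strips the "Node <n>" prefix when given a per-node sysfs meminfo file):

    #!/usr/bin/env bash
    # Sketch of the scan performed above for AnonHugePages, HugePages_Surp, etc.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # numeric value only; a trailing "kB" unit lands in $_
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this run, matching anon=0 above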
00:05:46.423 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.423 [get_meminfo locals, mem_f=/proc/meminfo, mapfile -t mem and the IFS=': ' read are set up exactly as in the AnonHugePages call above; identical records elided]
00:05:46.423 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3898444 kB' 'MemAvailable: 9477244 kB' 'Buffers: 40420 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403736 kB' 'Inactive: 4399960 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 145800 kB' 'Active(file): 1402688 kB' 'Inactive(file): 4254160 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 164688 kB' 'Mapped: 67944 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312368 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69112 kB' 'KernelStack: 4416 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:46.424 [identical iterations elided: each key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped via continue]
00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
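As an aside (this is not how setup/common.sh does it), when only the HugePages_* counters are wanted, a single awk pass over /proc/meminfo yields all four in one read instead of one full-file scan per key:

    awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1 "=" $2 }' /proc/meminfo
    # HugePages_Total=1024, HugePages_Free=1024, HugePages_Rsvd=0, HugePages_Surp=0 on this run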
243256 kB' 'Slab: 312368 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69112 kB' 'KernelStack: 4384 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:46.425 22:51:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:46.425 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... ~140 xtrace lines elided: setup/common.sh@32 compares each remaining /proc/meminfo key (Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, ..., HugePages_Total, HugePages_Free) against HugePages_Rsvd and issues `continue` for every non-match ...]
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:46.426 nr_hugepages=1024
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:46.426 resv_hugepages=0
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:46.426 surplus_hugepages=0
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.426 anon_hugepages=0
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
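[Editor's note] The long [[ ... ]] / continue runs above and below are setup/common.sh's get_meminfo helper scanning a meminfo snapshot one "key: value" pair at a time until the requested field matches, then echoing its value. A minimal sketch of that helper, reconstructed from the xtrace alone (names follow the trace; this is not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    # Sketch of get_meminfo as reconstructed from this trace: print the value
    # of one field from /proc/meminfo, or from a per-NUMA-node meminfo when a
    # node number is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node statistics live in sysfs; otherwise keep the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # node*/meminfo prefixes every line with "Node N "; strip the prefix
        # so both file formats parse the same way.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # This loop is the run of [[ key == pattern ]] / continue lines
            # in the log: skip until the requested key, then print its value.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd     # prints 0 in the run above
    get_meminfo HugePages_Surp 0   # per-node variant used later in this test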
00:05:46.426 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3898644 kB' 'MemAvailable: 9477444 kB' 'Buffers: 40420 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403728 kB' 'Inactive: 4399616 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145456 kB' 'Active(file): 1402688 kB' 'Inactive(file): 4254160 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 164320 kB' 'Mapped: 67976 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312368 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69112 kB' 'KernelStack: 4420 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[... ~130 xtrace lines elided: setup/common.sh@32 compares each key of the snapshot above against HugePages_Total, issuing `continue` for every non-match ...]
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
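[Editor's note] The get_nodes trace above builds the expected per-node picture by globbing sysfs, and the get_meminfo HugePages_Surp 0 call that follows is the per-node branch of the helper (note mem_f switching to /sys/devices/system/node/node0/meminfo). A sketch of the node discovery, reconstructed from the setup/hugepages.sh@29-@33 lines in the trace and assuming the single-node VM seen in this run:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nodes_sys
    # Record the hugepage count expected on each NUMA node, keyed by the
    # numeric suffix of the sysfs directory name (node0 -> 0).
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "no_nodes=$no_nodes"   # 1 on the VM traced above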
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3898644 kB' 'MemUsed: 8344336 kB' 'SwapCached: 0 kB' 'Active: 1403728 kB' 'Inactive: 4400136 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 145976 kB' 'Active(file): 1402688 kB' 'Inactive(file): 4254160 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'FilePages: 5668512 kB' 'Mapped: 67976 kB' 'AnonPages: 164580 kB' 'Shmem: 2596 kB' 'KernelStack: 4488 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243256 kB' 'Slab: 312368 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:46.428 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... ~100 xtrace lines elided: setup/common.sh@32 compares each key of the node0 snapshot above against HugePages_Surp, issuing `continue` for every non-match ...]
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:46.429 node0=1024 expecting 1024
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:46.429 
00:05:46.429 real 0m1.203s
00:05:46.429 user 0m0.395s
00:05:46.429 sys 0m0.795s
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:46.429 22:51:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:46.429 ************************************
00:05:46.429 END TEST default_setup
00:05:46.429 ************************************
00:05:46.429 22:51:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:46.429 22:51:35 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:46.429 22:51:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:46.429 22:51:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:46.429 22:51:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:46.429 ************************************
00:05:46.429 START TEST per_node_1G_alloc
00:05:46.429 ************************************
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:46.429 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:46.430 22:51:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:46.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:46.688 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
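[Editor's note] get_test_nr_hugepages, traced at the start of per_node_1G_alloc above, turns a size in kB into a hugepage count and pins it to the requested nodes: 1048576 kB (1 GiB) over the 2048 kB default page size yields the nr_hugepages=512 and nodes_test[0]=512 seen in the trace. A sketch of that arithmetic (reconstructed; the division by the default hugepage size is an assumption consistent with the traced values, not quoted source):

    #!/usr/bin/env bash
    # size / default hugepage size -> page count; every page goes to node 0.
    default_hugepages=2048                        # kB; Hugepagesize in /proc/meminfo
    size=1048576                                  # kB; 1 GiB requested by per_node_1G_alloc
    node_ids=(0)                                  # the HUGENODE list
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512
    declare -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages            # node0 gets all 512 pages
    done
    # Environment handed to scripts/setup.sh in the invocation logged above.
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]}"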
00:05:46.946 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4945080 kB' 'MemAvailable: 10523884 kB' 'Buffers: 40420 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403740 kB' 'Inactive: 4400236 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146076 kB' 'Active(file): 1402692 kB' 'Inactive(file): 4254160 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 164772 kB' 'Mapped: 67976 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312456 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69200 kB' 'KernelStack: 4436 kB' 'PageTables: 3680 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:46.947 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace scan elided: setup/common.sh@32 compares each key of the snapshot above against AnonHugePages, issuing `continue` for every non-match; the scan continues past the end of this excerpt ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.211 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.212 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4945080 kB' 'MemAvailable: 10523888 kB' 'Buffers: 40420 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403732 kB' 'Inactive: 4400324 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146160 kB' 'Active(file): 1402692 kB' 'Inactive(file): 4254164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 164560 kB' 'Mapped: 67976 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312772 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69516 kB' 'KernelStack: 4432 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:47.212 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.212 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.212 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.212 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
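The records above are bash xtrace output (the "-- setup/common.sh@NN -- #" prefix carries the source file and line) from the get_meminfo helper: it snapshots meminfo into an array, then walks it field by field, skipping every key that does not match the requested one and echoing the value of the first key that does. A minimal sketch of that scan, reconstructed from the traced commands; it reads /proc/meminfo directly rather than replaying a printf'd array, and it is not the verbatim SPDK helper:

get_meminfo_sketch() {
    # $1 is the meminfo key to look up, e.g. AnonHugePages or HugePages_Surp
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key shows up in the trace as one "continue"
        [[ $var == "$get" ]] || continue
        echo "$val" # e.g. 0 for AnonHugePages here, 512 for HugePages_Total
        return 0
    done </proc/meminfo
    return 1
}

With the snapshot shown above, get_meminfo_sketch AnonHugePages prints 0, which is exactly the value landing in hugepages.sh's "anon=0" assignment.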
[trace elided: setup/common.sh@31-32 scans MemFree through HugePages_Rsvd against HugePages_Surp; every non-matching field "continue"s, timestamps 00:05:47.212-00:05:47.213]
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.213 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4945080 kB' 'MemAvailable: 10523888 kB' 'Buffers: 40420 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403732 kB' 'Inactive: 4400328 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146164 kB' 'Active(file): 1402692 kB' 'Inactive(file): 4254164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 164788 kB' 'Mapped: 67976 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312772 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69516 kB' 'KernelStack: 4416 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 520536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[trace elided: setup/common.sh@31-32 begins the HugePages_Rsvd scan; MemTotal through Active "continue", timestamps 00:05:47.213]
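The preamble traced before each printf (common.sh@17-29) also shows how the helper picks its data source: "local node=" is empty for a system-wide query, so the "-e /sys/devices/system/node/node/meminfo" test sees a malformed path and the helper stays on /proc/meminfo, and the "${mem[@]#Node +([0-9]) }" expansion strips the "Node N " prefix that per-node meminfo lines carry so both sources parse identically. A hedged reconstruction of that selection logic (the function name is mine; this is not the verbatim script):

shopt -s extglob # the +([0-9]) pattern below needs extended globbing

read_meminfo_snapshot() {
    local node=${1-} mem_f=/proc/meminfo
    local -a mem
    # with node set (e.g. 0), prefer that node's meminfo; with node empty,
    # the path is malformed and this falls through to /proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}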
[trace elided: the HugePages_Rsvd scan continues through Inactive ... HugePages_Free; every non-matching field "continue"s, timestamps 00:05:47.214-00:05:47.215]
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:47.215 nr_hugepages=512
22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:47.215 resv_hugepages=0
22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:47.215 surplus_hugepages=0
22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:47.215 anon_hugepages=0
22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
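At this point hugepages.sh has collected all four numbers for this view: anon=0 (AnonHugePages), surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd), and the pool size nr_hugepages=512. The two arithmetic checks traced at @107 and @109 assert the pool is exactly what was asked for, with nothing inflated by surplus or held back in reservations. Spelled out with this run's numbers (a reconstruction of the traced conditions; the already-expanded literal 512 is the page count this test expects):

nr_hugepages=512 surp=0 resv=0 anon=0
(( 512 == nr_hugepages + surp + resv )) # 512 == 512 + 0 + 0, so the test passes
(( 512 == nr_hugepages ))               # no surplus or reserved pages in the pool
# sanity: 512 pages x 2048 kB (Hugepagesize) = 1048576 kB, matching the
# 'Hugetlb: 1048576 kB' line in every snapshot above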
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.215 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4945332 kB' 'MemAvailable: 10524140 kB' 'Buffers: 40420 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403732 kB' 'Inactive: 4400352 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146188 kB' 'Active(file): 1402692 kB' 'Inactive(file): 4254164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 164392 kB' 'Mapped: 67976 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312772 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69516 kB' 'KernelStack: 4484 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 520276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[trace elided: setup/common.sh@31-32 scans MemTotal through NFS_Unstable against HugePages_Total; the scan is still in progress where this excerpt ends, timestamps 00:05:47.215-00:05:47.216]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.216 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.217 22:51:36 
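The trace above is setup/common.sh's get_meminfo scanning one snapshot of /proc/meminfo field by field until it hits HugePages_Total. The idiom reduces to: slurp the file with mapfile, strip the "Node <n>" prefix that per-node meminfo files carry, then split each line on ': ' and return the value of the requested field. A minimal self-contained sketch of that idiom follows; the function and variable names are illustrative, not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Hedged sketch of the get_meminfo idiom traced above; not the exact
    # SPDK helper, just the same parsing technique.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node view when it exists
        # (this is why the trace probes /sys/devices/system/node/node<n>/meminfo;
        # with no node the probe path degenerates to ".../node/node/meminfo" and fails).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }          # per-node lines start "Node 0 ..."
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

    # e.g. on the box above: get_meminfo_sketch HugePages_Total    -> 512
    #                        get_meminfo_sketch HugePages_Surp 0   -> 0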
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n'
    'MemTotal: 12242980 kB' 'MemFree: 4945584 kB' 'MemUsed: 7297396 kB' 'SwapCached: 0 kB' 'Active: 1403740 kB' 'Inactive: 4399816 kB'
    'Active(anon): 1048 kB' 'Inactive(anon): 145652 kB' 'Active(file): 1402692 kB' 'Inactive(file): 4254164 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB'
    'Dirty: 472 kB' 'Writeback: 0 kB' 'FilePages: 5668516 kB' 'Mapped: 68052 kB' 'AnonPages: 164312 kB' 'Shmem: 2596 kB' 'KernelStack: 4384 kB'
    'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243256 kB' 'Slab: 312772 kB' 'SReclaimable: 243256 kB'
    'SUnreclaim: 69516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB'
    'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.217 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical read / compare / continue iterations for MemFree through HugePages_Free elided]
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
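The per-node bookkeeping just traced is plain arithmetic: each node's expected count starts from the even split of the global pool, then the reserved pages and that node's surplus are folded in. On this single-node VM that is 512/1 + 0 + 0 = 512. A compact restatement under those assumptions (names mirror the trace, but the snippet is illustrative and presumes nodes_test was seeded with the even split earlier in the script):

    # Single node, 512 global pages, nothing reserved, no surplus
    # (resv and surp both read back as 0 in the trace above).
    nr_hugepages=512 no_nodes=1 resv=0 surp=0
    nodes_test=()
    nodes_test[0]=$(( nr_hugepages / no_nodes ))   # even-split seed
    (( nodes_test[0] += resv ))                    # hugepages.sh@116
    (( nodes_test[0] += surp ))                    # hugepages.sh@117, surp from node0 meminfo
    echo "node0=${nodes_test[0]} expecting 512"    # matches the log line below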
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:47.218 node0=512 expecting 512
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:47.218
00:05:47.218 real    0m0.773s
00:05:47.218 user    0m0.307s
00:05:47.218 sys     0m0.507s
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:47.218 22:51:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:47.218 ************************************
00:05:47.218 END TEST per_node_1G_alloc
00:05:47.218 ************************************
00:05:47.218 22:51:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:47.218 22:51:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:47.218 22:51:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:47.218 22:51:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:47.218 22:51:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:47.218 ************************************
00:05:47.218 START TEST even_2G_alloc
00:05:47.218 ************************************
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
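get_test_nr_hugepages turns a size into a page count by dividing by the default hugepage size, so the 2 GiB requested by even_2G_alloc (2097152 kB) becomes nr_hugepages=1024 with the 2048 kB pages this VM reports in Hugepagesize. The arithmetic, assuming both quantities are in kB as the trace implies:

    # 2 GiB expressed in kB, divided by a 2048 kB hugepage:
    size=2097152 default_hugepages=2048
    echo $(( size / default_hugepages ))   # -> 1024, the nr_hugepages above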
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:47.218 22:51:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:47.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:47.476 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
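With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh is being asked to spread the pool evenly across NUMA nodes rather than letting the kernel place it. The per-node knob involved is the stock kernel sysfs interface; a hedged sketch of an even split written directly against that interface follows (this is not the actual setup.sh logic, which this log does not show):

    # Illustrative even per-node split; requires root. The path layout is
    # the standard kernel sysfs interface for 2 MiB hugepages.
    NRHUGE=1024
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
    done

With the single node here the "split" is the whole 1024 pages, which is what the HugePages_Total: 1024 snapshots below report.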
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.045 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n'
    'MemTotal: 12242980 kB' 'MemFree: 3897672 kB' 'MemAvailable: 9476476 kB' 'Buffers: 40420 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB'
    'Active: 1403764 kB' 'Inactive: 4400240 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 146096 kB' 'Active(file): 1402708 kB' 'Inactive(file): 4254144 kB'
    'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 164740 kB' 'Mapped: 68060 kB'
    'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312524 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69268 kB' 'KernelStack: 4384 kB' 'PageTables: 3436 kB'
    'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520664 kB' 'VmallocTotal: 34359738367 kB'
    'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB'
    'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0'
    'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[identical read / compare / continue iterations for MemFree through HardwareCorrupted elided]
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:48.046 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
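Before trusting AnonHugePages, verify_nr_hugepages first checks whether transparent hugepages are switched off: the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test above compares the contents of /sys/kernel/mm/transparent_hugepage/enabled (the brackets mark the active mode) against a [never] pattern. Since madvise is active here, THP could in principle contribute, so the helper reads AnonHugePages and finds 0 kB. The same check written out, reusing the get_meminfo_sketch defined earlier (the sysfs path is the standard kernel one; variable names are illustrative):

    # "always [madvise] never" -> THP is not disabled, so account for it.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB on this run
    else
        anon=0
    fi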
kB' 'Inactive(file): 4254144 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 164480 kB' 'Mapped: 68060 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312524 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69268 kB' 'KernelStack: 4384 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.047 22:51:37 setup.sh.hugepages.even_2G_alloc -- 
00:05:48.048 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:48.048 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.048 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
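For readability, the xtrace above corresponds to the following parsing pattern. This is a minimal sketch reconstructed from the trace, not the verbatim setup/common.sh helper; the function and variable names (get_meminfo, get, node, mem_f, mem) are taken from the trace itself.

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo pattern visible in the trace: read /proc/meminfo,
# or the per-node file when a node id is given, strip the "Node N " prefix
# that per-node files carry, then scan "key: value" pairs until the requested
# key matches and print its value.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		# Quoting "$get" forces a literal comparison; xtrace renders such
		# literal patterns with the \H\u\g\e... escaping seen in this log.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

An equivalent one-off spot check from a shell, without the helper: awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo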
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.310 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3897936 kB' 'MemAvailable: 9476740 kB' 'Buffers: 40420 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403756 kB' 'Inactive: 4400468 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146324 kB' 'Active(file): 1402708 kB' 'Inactive(file): 4254144 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 164972 kB' 'Mapped: 68044 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312524 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69268 kB' 'KernelStack: 4384 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[... per-key xtrace trimmed: fields MemTotal through HugePages_Free are each compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped via continue / IFS=': ' / read -r var val _ ...]
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:48.312 nr_hugepages=1024
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:48.312 resv_hugepages=0
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:48.312 surplus_hugepages=0
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:48.312 anon_hugepages=0
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
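The arithmetic asserted here is straightforward: the run requested nr_hugepages=1024 pages of Hugepagesize 2048 kB, i.e. 1024 * 2048 kB = 2097152 kB = 2 GiB, which matches the Hugetlb figure in the snapshots above (this is the "even_2G_alloc" target). A sketch of the consistency check, with variable names taken from the trace; the exact SPDK logic may differ:

# Sketch of the verification traced above (assumed shape, names from the trace).
nr_hugepages=1024                        # requested 2048 kB pages (2 GiB total)
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo HugePages_Total)     # 1024 in this run
# The pool is correctly sized when the kernel-reported total equals the
# requested count plus surplus and reserved pages:
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2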
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.312 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3897936 kB' 'MemAvailable: 9476740 kB' 'Buffers: 40420 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403756 kB' 'Inactive: 4400168 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146024 kB' 'Active(file): 1402708 kB' 'Inactive(file): 4254144 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 164660 kB' 'Mapped: 68044 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312524 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69268 kB' 'KernelStack: 4520 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 520664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[... per-key xtrace trimmed: fields MemTotal through FilePmdMapped are each compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped via continue / IFS=': ' / read -r var val _ ...]
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
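The get_nodes step enumerates NUMA nodes with an extglob pathname pattern and derives each node id from the directory name; per-node counters are then read from that node's own meminfo file (note the @23/@24 switch above from /proc/meminfo to node0's sysfs file once a node id is supplied). A small sketch of that pattern follows; nodes_sys, nodes_test and no_nodes are names shown in the trace, but the surrounding logic is abbreviated and partly assumed:

shopt -s extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# "${node##*node}" strips everything up to the last "node",
	# leaving the numeric node id (node0 -> 0).
	nodes_sys[${node##*node}]=1024
done
no_nodes=${#nodes_sys[@]}     # 1 on this single-NUMA-node VM
(( no_nodes > 0 )) || echo "no NUMA nodes detected" >&2
# Per-node hugepage counters come from the node-local meminfo, e.g.:
get_meminfo HugePages_Surp 0  # reads /sys/devices/system/node/node0/meminfo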
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3897936 kB' 'MemUsed: 8345044 kB' 'SwapCached: 0 kB' 'Active: 1403756 kB' 'Inactive: 4400192 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146048 kB' 'Active(file): 1402708 kB' 'Inactive(file): 4254144 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'FilePages: 5668512 kB' 'Mapped: 68012 kB' 'AnonPages: 164656 kB' 'Shmem: 2596 kB' 'KernelStack: 4436 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243256 kB' 'Slab: 312524 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.313 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 
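Because this get_meminfo call passed node=0, common.sh first switches the source from /proc/meminfo to /sys/devices/system/node/node0/meminfo, then strips the "Node 0 " prefix that every per-node line carries, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion above does. A hedged reconstruction of that selection and strip (the real script feeds mapfile through a traced printf, as shown above; reading the file directly is equivalent for this purpose):

    shopt -s extglob                          # the +([0-9]) pattern below needs extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                 # one array element per line
    mem=("${mem[@]#Node +([0-9]) }")          # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"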
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.314 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:48.315 node0=1024 expecting 1024 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:48.315 00:05:48.315 real 0m0.992s 00:05:48.315 user 0m0.292s 00:05:48.315 sys 0m0.738s 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.315 22:51:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 ************************************ 00:05:48.315 END TEST even_2G_alloc 00:05:48.315 ************************************ 00:05:48.315 22:51:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:48.315 22:51:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:48.315 22:51:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.315 22:51:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.315 22:51:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 ************************************ 00:05:48.315 START TEST odd_alloc 00:05:48.315 ************************************ 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.315 22:51:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:48.574 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.145 22:51:38 
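even_2G_alloc finishes with [[ 1024 == 1024 ]] and odd_alloc starts by sizing a deliberately odd page count. The 2098176 passed to get_test_nr_hugepages is in kB: HUGEMEM=2049 MB gives 2049 * 1024 = 2098176 kB, which at the 2048 kB page size is 1024.5 pages, and the trace settles on nr_hugepages=1025. A quick arithmetic check (the round-up step is inferred from the traced result, not quoted from hugepages.sh):

    size_kb=$((2049 * 1024))                        # HUGEMEM=2049 MB -> 2098176 kB
    page_kb=2048                                    # Hugepagesize from /proc/meminfo
    echo $(( (size_kb + page_kb - 1) / page_kb ))   # ceil(1024.5) -> 1025

scripts/setup.sh then rewrites the hugepage pool (the 0000:00:03.0 and 0000:00:10.0 lines are its PCI device scan), and verify_nr_hugepages opens by probing transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 is the contents of /sys/kernel/mm/transparent_hugepage/enabled being checked for an active [never] mode. Since THP is not disabled here, the script samples an AnonHugePages baseline, which comes back 0 below. A hedged reconstruction, reusing the hypothetical get_field sketch from earlier:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_field AnonHugePages)   # THP active: record the anon-hugepage baseline
    else
        anon=0
    fi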
setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.145 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3903272 kB' 'MemAvailable: 9482076 kB' 'Buffers: 40428 kB' 'Cached: 5628084 kB' 'SwapCached: 0 kB' 'Active: 1403768 kB' 'Inactive: 4395476 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141344 kB' 'Active(file): 1402720 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 159920 kB' 'Mapped: 67304 kB' 'Shmem: 2596 kB' 'KReclaimable: 243256 kB' 'Slab: 312684 kB' 'SReclaimable: 243256 kB' 'SUnreclaim: 69428 kB' 'KernelStack: 4304 kB' 'PageTables: 3176 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- 
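The printf above is the complete /proc/meminfo snapshot that the following scan walks. It is already self-consistent for this test: HugePages_Total and HugePages_Free are both 1025, and Hugetlb is 2099200 kB, i.e. exactly 1025 pages of 2048 kB. A one-line sanity check:

    echo $((1025 * 2048))   # 2099200, matching the Hugetlb: field in the snapshot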
setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.146 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.147 22:51:38 
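With anon=0 recorded, verify_nr_hugepages gathers the same two correction terms the even_2G_alloc run used in its hugepages.sh@110 check: surplus pages (HugePages_Surp, scanned next) and reserved pages (HugePages_Rsvd, scanned after that). Assuming the odd_alloc pass applies that identity the same way, with nr_hugepages=1025 from the sizing step, the acceptance test amounts to the following sketch (hypothetical variable names, get_field as before):

    total=$(get_field HugePages_Total)   # 1025 in this run
    surp=$(get_field HugePages_Surp)     # 0
    resv=$(get_field HugePages_Rsvd)     # 0
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage count mismatch' >&2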
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3903272 kB' 'MemAvailable: 9482068 kB' 'Buffers: 40428 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403768 kB' 'Inactive: 4395736 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141604 kB' 'Active(file): 1402720 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 372 kB' 'Writeback: 0 kB' 'AnonPages: 160180 kB' 'Mapped: 67304 kB' 'Shmem: 2596 kB' 'KReclaimable: 243248 kB' 'Slab: 312676 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69428 kB' 'KernelStack: 4304 kB' 'PageTables: 3176 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.147 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.147 22:51:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.148 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 
[ ... identical IFS/read/match/continue records elided: MemTotal through HugePages_Free all fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match ... ]
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:49.150 nr_hugepages=1025
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:49.150 resv_hugepages=0
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:49.150 surplus_hugepages=0
00:05:49.150 anon_hugepages=0
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:49.150 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3903524 kB' 'MemAvailable: 9482320 kB' 'Buffers: 40428 kB' 'Cached: 5628092 kB' 'SwapCached: 0 kB' 'Active: 1403760 kB' 'Inactive: 4395244 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141112 kB' 'Active(file): 1402720 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 376 kB' 'Writeback: 0 kB' 'AnonPages: 159704 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 243248 kB' 'Slab: 312572 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69324 kB' 'KernelStack: 4356 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 507436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
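[note] With surp=0 and resv=0 extracted, hugepages.sh asserts at @107 above, and again at @110 after re-reading HugePages_Total from the dump just printed, that the kernel pool matches the odd count the test requested: HugePages_Total must equal nr_hugepages + surplus + reserved. The same arithmetic as a standalone sketch (assumes the get_meminfo sketch shown earlier):

    # Sketch of the odd_alloc verification arithmetic from hugepages.sh@107/@110.
    nr_hugepages=1025                       # the odd page count under test
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1025 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages verified"
    else
        echo "mismatch: total=$total surp=$surp resv=$resv" >&2
    fi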
[ ... identical IFS/read/match/continue records elided: MemTotal through FilePmdMapped all fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match ... ]
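[note] About the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l: the right side of a bash [[ == ]] is a glob pattern, and when the script matches against a quoted expansion the value must be taken literally, so set -x re-prints it with every character backslash-escaped. A two-line reproduction:

    key=HugePages_Total
    set -x
    [[ HugePages_Total == "$key" ]] && echo literal match
    set +x
    # xtrace prints: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]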
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:49.151 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:49.152 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3903276 kB' 'MemUsed: 8339704 kB' 'SwapCached: 0 kB' 'Active: 1403768 kB' 'Inactive: 4395764 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141632 kB' 'Active(file): 1402728 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 376 kB' 'Writeback: 0 kB' 'FilePages: 5668520 kB' 'Mapped: 67288 kB' 'AnonPages: 159972 kB' 'Shmem: 2596 kB' 'KernelStack: 4504 kB' 'PageTables: 3600 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243248 kB' 'Slab: 312572 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
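[note] This call is get_meminfo HugePages_Surp 0, so the helper switches from /proc/meminfo to the per-node file and strips the "Node 0 " prefix from each row (the mem=(...) record above); that is also why this dump carries per-node fields such as MemUsed and FilePages that /proc/meminfo does not. The prefix-stripping step in isolation (extglob is required for the +([0-9]) pattern, exactly as in the trace):

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemFree: ..." -> "MemFree: ..."
    printf '%s\n' "${mem[@]:0:3}"      # first three rows, prefix gone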
[ ... identical IFS/read/match/continue records elided: node0 fields MemTotal through HugePages_Free all fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match ... ]
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:49.412 node0=1025 expecting 1025
00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:49.412 00:05:49.412 real 0m0.968s 00:05:49.412 user 0m0.276s 00:05:49.412 sys 0m0.728s 00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.412 22:51:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:49.412 ************************************ 00:05:49.412 END TEST odd_alloc 00:05:49.412 ************************************ 00:05:49.412 22:51:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:49.412 22:51:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:49.412 22:51:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.412 22:51:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.412 22:51:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:49.412 ************************************ 00:05:49.412 START TEST custom_alloc 00:05:49.412 ************************************ 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 
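[note] The custom_alloc prologue above converts a size request into a page count: get_test_nr_hugepages is handed 1048576 kB (1 GiB), the machine's Hugepagesize is 2048 kB (see the dumps above), so nr_hugepages becomes 512; with a single NUMA node the whole allotment lands in nodes_test[0] and the test exports HUGENODE='nodes_hp[0]=512'. The sizing math as a sketch (names mirror the trace; the real helpers take more arguments):

    # Sketch of the arithmetic behind nr_hugepages=512 above.
    size_kb=1048576                                      # requested: 1 GiB in kB
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
    (( size_kb >= default_hugepages )) || size_kb=$default_hugepages
    nr_hugepages=$(( size_kb / default_hugepages ))      # 1048576 / 2048 = 512
    no_nodes=1                                           # one NUMA node on this box
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$nr_hugepages               # nodes_test[0]=512
    echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"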
00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.412 22:51:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:49.670 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.931 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:49.931 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # 
get_meminfo AnonHugePages 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4957148 kB' 'MemAvailable: 10535956 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395788 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141648 kB' 'Active(file): 1402724 kB' 'Inactive(file): 4254140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 4 kB' 'AnonPages: 160240 kB' 'Mapped: 67248 kB' 'Shmem: 2596 kB' 'KReclaimable: 243248 kB' 'Slab: 312308 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69060 kB' 'KernelStack: 4436 kB' 'PageTables: 3516 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.932 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
0 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4957148 kB' 'MemAvailable: 10535956 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395412 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141272 kB' 'Active(file): 1402724 kB' 'Inactive(file): 4254140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 4 kB' 'AnonPages: 159808 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 243248 kB' 'Slab: 312312 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69064 kB' 'KernelStack: 4392 kB' 'PageTables: 3252 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.933 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
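The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue records above are get_meminfo stepping through the /proc/meminfo snapshot one field at a time with IFS=': ' until it reaches the requested key, then echoing that key's value; the same scan runs once each for AnonHugePages, HugePages_Surp and HugePages_Rsvd, and each returns 0 on this freshly configured 512-page pool. A simplified sketch of that scan pattern for illustration; the real helper in setup/common.sh buffers the file with mapfile as the trace shows, and the /sys/devices/system/node/node/meminfo check suggests a per-node variant when a node is given, both elided here:

  # Sketch of the get_meminfo scan pattern visible in the trace:
  # split "key: value" pairs on IFS=': ' and print the requested field.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every field until the requested one comes up
          # (the trailing "kB" unit, if any, lands in "_").
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  # Usage matching the checks above: all three print 0 here.
  for key in AnonHugePages HugePages_Surp HugePages_Rsvd; do
      printf '%s=%s\n' "$key" "$(get_meminfo "$key")"
  done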
00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.934 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.934 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4957640 kB' 'MemAvailable: 10536448 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395380 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141240 kB' 'Active(file): 1402724 kB' 'Inactive(file): 4254140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 176 kB' 'Writeback: 4 kB' 'AnonPages: 159748 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 243248 kB' 'Slab: 312312 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69064 kB' 'KernelStack: 4340 kB' 'PageTables: 2932 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.935 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total and HugePages_Free each fail the match against HugePages_Rsvd and hit continue]
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=512
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
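Every get_meminfo call traced in this test follows the same pattern: split each meminfo line on IFS=': ', skip (continue) every key other than the requested one, then echo the matching value and return 0. A minimal standalone sketch of that pattern in bash (meminfo_get is an illustrative name, not the SPDK helper itself):

    # Sketch: fetch one field from /proc/meminfo the way the traced loop does.
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # requested key not present
    }
    meminfo_get HugePages_Rsvd   # prints 0 on the host traced above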
resv_hugepages=0
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: get_meminfo locals set get=HugePages_Total with node unset; /sys/devices/system/node/node/meminfo does not exist, so mem_f=/proc/meminfo is read via mapfile -t mem and scanned with IFS=': ']
00:05:49.936 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4957640 kB' 'MemAvailable: 10536448 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395640 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141500 kB' 'Active(file): 1402724 kB' 'Inactive(file): 4254140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 176 kB' 'Writeback: 4 kB' 'AnonPages: 160008 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 243248 kB' 'Slab: 312312 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69064 kB' 'KernelStack: 4408 kB' 'PageTables: 3192 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: every key from MemTotal through FilePmdMapped fails the match against HugePages_Total and hits continue]
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
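get_nodes, just traced, discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) and records a per-node hugepage count. A sketch of the same discovery loop, assuming the sysfs layout shown in this trace, where each per-node meminfo line carries a "Node <N> " prefix:

    # Sketch: report HugePages_Total for every NUMA node via sysfs.
    shopt -s extglob nullglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Per-node lines look like: "Node 0 HugePages_Total:   512"
        while read -r _ _ key val _; do
            [[ $key == HugePages_Total: ]] && echo "node${node}=${val}"
        done < "$node_dir/meminfo"
    done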
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace elided: get=HugePages_Surp, node=0; /sys/devices/system/node/node0/meminfo exists, so mem_f switches to it and the "Node 0 " prefix is stripped from every line before the scan]
00:05:50.198 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4957640 kB' 'MemUsed: 7285340 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395420 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141280 kB' 'Active(file): 1402724 kB' 'Inactive(file): 4254140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 176 kB' 'Writeback: 4 kB' 'FilePages: 5668524 kB' 'Mapped: 67288 kB' 'AnonPages: 160320 kB' 'Shmem: 2596 kB' 'KernelStack: 4392 kB' 'PageTables: 3152 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243248 kB' 'Slab: 312312 kB' 'SReclaimable: 243248 kB' 'SUnreclaim: 69064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: every key from MemTotal through HugePages_Free fails the match against HugePages_Surp and hits continue]
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=512 expecting 512
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:50.200
00:05:50.200 real 0m0.757s
00:05:50.200 user 0m0.303s
00:05:50.200 sys 0m0.498s
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.200 22:51:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
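The checks traced above (hugepages.sh@107-130) reduce to one invariant: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages, both globally and per node, and node0 must end up holding the expected 512 pages. The same invariant restated as a self-contained sketch (grab is an illustrative one-liner, not an SPDK helper):

    # Sketch of the consistency check this test asserts.
    grab() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    nr_requested=512
    total=$(grab HugePages_Total)
    surp=$(grab HugePages_Surp)
    resv=$(grab HugePages_Rsvd)
    if (( total == nr_requested + surp + resv )); then
        echo "hugepages consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi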
00:05:50.200 ************************************
00:05:50.200 END TEST custom_alloc
00:05:50.200 ************************************
00:05:50.200 22:51:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:50.200 22:51:39 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:50.200 22:51:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:50.200 22:51:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.200 22:51:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:50.200 ************************************
00:05:50.200 START TEST no_shrink_alloc
00:05:50.200 ************************************
00:05:50.200 22:51:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:05:50.200 22:51:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
[xtrace elided: size=2097152 and node_ids=('0') are parsed; size >= default_hugepages, so nr_hugepages=1024; get_test_nr_hugepages_per_node 0 takes user_nodes=('0'), _nr_hugepages=1024, _no_nodes=1, sets nodes_test[0]=1024 for the single requested node, and returns 0]
00:05:50.200 22:51:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:50.200 22:51:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:50.200 22:51:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:50.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:50.459 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
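get_test_nr_hugepages 2097152 0, traced above, converts a size request into a page count: 2097152 divided by the host's 2048 kB Hugepagesize gives nr_hugepages=1024, so the size argument is evidently expressed in kB here. A sketch of that conversion (variable names are illustrative):

    # Sketch: derive a hugepage count from a size request, as in the trace
    # above (2097152 kB / 2048 kB per page = 1024 pages).
    size_kb=2097152
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)
    (( size_kb >= hugepagesize_kb )) || { echo "request below one page" >&2; exit 1; }
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"   # -> 1024 with 2048 kB pages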
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace elided: get=AnonHugePages with node unset, so mem_f=/proc/meminfo]
00:05:51.032 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3907352 kB' 'MemAvailable: 9486088 kB' 'Buffers: 40428 kB' 'Cached: 5628104 kB' 'SwapCached: 0 kB' 'Active: 1403760 kB' 'Inactive: 4395652 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141508 kB' 'Active(file): 1402728 kB' 'Inactive(file): 4254144 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 159880 kB' 'Mapped: 67352 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 311992 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 68824 kB' 'KernelStack: 4352 kB' 'PageTables: 3232 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
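The hugepages.sh@96 test above matches the content of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never", brackets marking the active mode) against the glob *[never]*: anonymous hugepages are only queried when THP is not disabled. A sketch of the same gate:

    # Sketch: gate on the active transparent-hugepage mode, mirroring the
    # [[ ... != *\[\n\e\v\e\r\]* ]] check traced above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        mode=${thp#*\[}
        mode=${mode%%\]*}
        echo "THP enabled, active mode: $mode"            # -> madvise on this host
    else
        echo "THP disabled; skip AnonHugePages accounting"
    fi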
[xtrace elided: every key from MemTotal through VmallocChunk fails the match against AnonHugePages and hits continue]
setup/common.sh@31 -- # IFS=': ' 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3907352 kB' 'MemAvailable: 9486080 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403764 kB' 'Inactive: 4395716 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141584 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 159968 kB' 'Mapped: 67276 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312112 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 68944 kB' 'KernelStack: 4316 kB' 'PageTables: 3060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 
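The trace above is bash xtrace output from the get_meminfo helper in SPDK's setup/common.sh: it walks /proc/meminfo one "key: value" pair at a time and echoes the value of the requested field once it matches, which is why every skipped field emits its own continue line. A minimal stand-alone sketch of the same parsing pattern -- simplified to read the file directly instead of the mapfile/printf round-trip seen in the trace, with an illustrative function name, not the project's exact helper:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g.: meminfo_get AnonHugePages -> 0
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits "AnonHugePages:       0 kB" into key, number, unit
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1 # field not present
    }
    meminfo_get "${1:-AnonHugePages}"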
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.033 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3907352 kB' 'MemAvailable: 9486080 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403764 kB' 'Inactive: 4395716 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141584 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 159968 kB' 'Mapped: 67276 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312112 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 68944 kB' 'KernelStack: 4316 kB' 'PageTables: 3060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:51.034 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-match loop skipped every non-matching field, MemTotal through HugePages_Rsvd]
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
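HugePages_Surp of 0 means no surplus pages are in play: surplus pages only exist when the kernel is allowed to grow the pool past nr_hugepages via nr_overcommit_hugepages. The same counters the helper scrapes from /proc/meminfo are also exposed per pool size in sysfs; for the 2048 kB pool used in this run one could cross-check by hand with the standard kernel paths:

    # Per-size hugepage counters for the 2 MiB pool (standard hugetlb sysfs layout)
    base=/sys/kernel/mm/hugepages/hugepages-2048kB
    cat "$base/nr_hugepages" "$base/free_hugepages" \
        "$base/resv_hugepages" "$base/surplus_hugepages"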
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:51.035 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3907352 kB' 'MemAvailable: 9486080 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403764 kB' 'Inactive: 4395456 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141324 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 159972 kB' 'Mapped: 67276 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312112 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 68944 kB' 'KernelStack: 4268 kB' 'PageTables: 2944 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:51.036 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-match loop skipped every non-matching field, MemTotal through HugePages_Free]
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:51.037 nr_hugepages=1024
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:51.037 resv_hugepages=0
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:51.037 surplus_hugepages=0
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:51.037 anon_hugepages=0
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3907128 kB' 'MemAvailable: 9485856 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395476 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141344 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 159748 kB' 'Mapped: 67292 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312184 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69016 kB' 'KernelStack: 4288 kB' 'PageTables: 3136 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.037 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
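The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" entries above and below is setup/common.sh's meminfo lookup executing once per /proc/meminfo key: load the (per-node) meminfo file, strip the "Node <n> " prefix, then scan field by field for the one requested. A minimal bash re-creation of that loop, reconstructed from the trace — the name get_meminfo and the @17-@33 structure come from the log itself, but the body here is an approximation, not the shipped script:

shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    local -a mem
    # Per-node stats live under /sys; fall back to the global /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip it so both
    # sources parse identically (this is the @29 expansion in the trace).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Every non-matching key takes the "continue" branch -- that is
        # the long run of @32 entries in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

On this box the lookup completing just below returns 1024 for HugePages_Total on node 0, which is what the "echo 1024 / return 0" pair at @33 reports.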
00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 
22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:05:51.038 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3907128 kB' 'MemUsed: 8335852 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4395224 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141092 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 5668524 kB' 'Mapped: 67292 kB' 'AnonPages: 159704 kB' 'Shmem: 2596 kB' 'KernelStack: 4272 kB' 'PageTables: 3096 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243168 kB' 'Slab: 312184 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 
22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.039 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:51.040 node0=1024 expecting 1024 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ 
output == output ]] 00:05:51.040 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:51.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:51.561 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:51.561 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:51.561 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3905156 kB' 'MemAvailable: 9483884 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403780 kB' 'Inactive: 4396408 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142276 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 160668 kB' 'Mapped: 67520 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312224 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69056 kB' 'KernelStack: 4516 kB' 'PageTables: 3744 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 
22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
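Once the field lookups return, setup/hugepages.sh folds them into a per-node tally: the "node0=1024 expecting 1024" check earlier in the trace is the tail of that arithmetic, and the pass now running repeats it after the NRHUGE=512 re-run. A sketch of that verification under the same names (it reuses the get_meminfo sketch above); the gating on transparent_hugepage and the resv/surp additions mirror @96 and @116-@117 in the trace, but the body is reconstructed rather than copied:

shopt -s nullglob    # so the node glob expands to nothing on odd systems

verify_nr_hugepages() {
    local expected=$1
    local node total anon=0 surp resv
    # @96: anonymous THP is only counted when the kernel's
    # transparent_hugepage mode is not "[never]" (it is "always [madvise]
    # never" here, so the lookup runs; the value is recorded but not used
    # in the node arithmetic shown in this trace).
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
    fi
    surp=$(get_meminfo HugePages_Surp)       # 0 here
    resv=$(get_meminfo HugePages_Rsvd)       # 0 here
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        total=$(get_meminfo HugePages_Total "$node")
        # @116/@117: reserved pages and the node's own surplus fold back
        # into the tally before the comparison (both are 0 on this box).
        (( total += resv ))
        (( total += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=$total expecting $expected"   # "node0=1024 expecting 1024"
        [[ $total == "$expected" ]] || return 1
    done
}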
00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.562 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3905408 kB' 'MemAvailable: 9484136 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403772 kB' 'Inactive: 4396104 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141972 kB' 'Active(file): 1402732 kB' 'Inactive(file): 4254132 kB' 
'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 160504 kB' 'Mapped: 67484 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312240 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69072 kB' 'KernelStack: 4416 kB' 'PageTables: 3384 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.563 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.563 22:51:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 keep reading the snapshot key by key -- Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal/Used/Chunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free, HugePages_Rsvd -- and every key that is not HugePages_Surp hits 'continue']
00:05:51.564 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:51.564 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:51.564 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:51.564 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
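[editor's note] For readers decoding the trace: hugepages.sh@99 just captured the value printed by get_meminfo into surp. The sketch below reconstructs the lookup pattern the xtrace exercises -- it is assembled from the trace records (the local declarations at common.sh@17-20, the sysfs fallback at @22-24, the "Node N " prefix strip at @29, and the IFS=': ' scan at @31-33), not copied from the verbatim setup/common.sh source, so treat it as an illustration of the mechanism rather than SPDK's exact code.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in this trace (a reconstruction,
# not the verbatim SPDK helper). Usage: get_meminfo KEY [NODE]
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _ mem_f mem
    mem_f=/proc/meminfo
    # With a node id, read the per-node stats from sysfs instead
    # (with an empty $node this path does not exist, as at common.sh@23).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # IFS=': ' splits "HugePages_Surp:   0" into var/val; the trailing
        # _ swallows unit suffixes such as "kB".
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' above
        echo "$val"
        return 0
    done
}

In the run above, get_meminfo HugePages_Surp walks every key of the snapshot before matching and finally prints 0, which hugepages.sh@99 stores as surp=0; the same helper is about to be called again for HugePages_Rsvd.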
mem=("${mem[@]#Node +([0-9]) }") 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3905408 kB' 'MemAvailable: 9484136 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403788 kB' 'Inactive: 4395668 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141560 kB' 'Active(file): 1402756 kB' 'Inactive(file): 4254108 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 160452 kB' 'Mapped: 67556 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312368 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69200 kB' 'KernelStack: 4364 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.565 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:51.566 nr_hugepages=1024 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:51.566 resv_hugepages=0 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:51.566 surplus_hugepages=0 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:51.566 anon_hugepages=0 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:51.566 22:51:40 
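[editor's note] The arithmetic checks at hugepages.sh@107 and @109 are the point of this test step: with shrink disallowed, the kernel's huge page pool must be fully accounted for. The xtrace only shows expanded values (the literal 1024 and 0), so the variable wiring below is reconstructed, not quoted from hugepages.sh; it reuses the get_meminfo sketch given earlier.

# Sketch of the accounting asserted at hugepages.sh@107/@109 (reconstructed;
# values in comments are the ones visible in this run's trace).
nr_hugepages=1024                        # requested pool size
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo HugePages_Total)     # 1024 in this run

# Every configured page must be explained by the pool plus surplus and
# reserved pages; a mismatch would mean the allocator shrank the pool.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1

Both relations hold here (1024 == 1024 + 0 + 0), so the trace proceeds to re-read HugePages_Total and then to the per-node checks.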
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.566 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.567 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3905408 kB' 'MemAvailable: 9484136 kB' 'Buffers: 40428 kB' 'Cached: 5628096 kB' 'SwapCached: 0 kB' 'Active: 1403788 kB' 'Inactive: 4395940 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141832 kB' 'Active(file): 1402756 kB' 'Inactive(file): 4254108 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 160464 kB' 'Mapped: 67556 kB' 'Shmem: 2596 kB' 'KReclaimable: 243168 kB' 'Slab: 312368 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69200 kB' 'KernelStack: 4364 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 507824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:05:51.567 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:51.567 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 scan against HugePages_Total; MemTotal through FilePmdMapped all hit 'continue']
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
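[editor's note] get_nodes at hugepages.sh@112 discovers the NUMA topology; on this single-socket VM it finds exactly one node. The sketch below reconstructs that enumeration from the xtrace. One loud assumption: the trace at @30 only shows the already-expanded value 1024, so where that per-node count is read from is not visible -- the nr_hugepages sysfs read below is a guess at a plausible source, not SPDK's confirmed one.

# Sketch of the get_nodes step (reconstructed from the xtrace; the source
# of the per-node count is an assumption, see note above).
shopt -s extglob        # +([0-9]) is an extglob pattern
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} reduces .../node0 to the bare index 0
    nodes_sys[${node##*node}]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
done
no_nodes=${#nodes_sys[@]}   # 1 on this single-node VM
(( no_nodes > 0 )) || exit 1

With the topology known, the loop at hugepages.sh@115-117 in the trace below folds the reserved count into each node's expected value and re-reads HugePages_Surp per node, this time from the node's own sysfs meminfo.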
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:51.568 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 3905408 kB' 'MemUsed: 8337572 kB' 'SwapCached: 0 kB' 'Active: 1403788 kB' 'Inactive: 4395940 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 141832 kB' 'Active(file): 1402756 kB' 'Inactive(file): 4254108 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 5668524 kB' 'Mapped: 67556 kB' 'AnonPages: 160464 kB' 'Shmem: 2596 kB' 'KernelStack: 4432 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 243168 kB' 'Slab: 312368 kB' 'SReclaimable: 243168 kB' 'SUnreclaim: 69200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 scan of the node0 snapshot against HugePages_Surp, in progress -- MemTotal, MemFree, MemUsed, SwapCached, Active/Inactive(+anon/file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, and AnonHugePages have hit 'continue' at this point in the excerpt]
00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:51.569 node0=1024 expecting 1024 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:51.569 00:05:51.569 real 0m1.426s 00:05:51.569 user 0m0.593s 00:05:51.569 sys 0m0.913s 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.569 22:51:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:51.569 ************************************ 00:05:51.569 END TEST no_shrink_alloc 00:05:51.569 ************************************ 00:05:51.569 22:51:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:51.569 22:51:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:51.569 00:05:51.569 real 0m6.570s 00:05:51.569 user 0m2.394s 00:05:51.569 sys 0m4.389s 00:05:51.569 22:51:40 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.569 22:51:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:51.569 ************************************ 00:05:51.569 END TEST hugepages 00:05:51.569 ************************************ 00:05:51.569 22:51:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:51.569 22:51:40 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:51.569 22:51:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.569 22:51:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.569 22:51:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:51.569 ************************************ 00:05:51.569 START TEST driver 00:05:51.569 ************************************ 00:05:51.569 22:51:40 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:51.828 * Looking for test storage... 
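The get_meminfo trace above shows the parsing pattern setup/common.sh relies on: when a per-NUMA-node meminfo file exists, read it instead of /proc/meminfo, strip the "Node <n> " prefix from each line, then split on ': ' and return the value of the requested field. A minimal standalone sketch of that idea (the function body here is a paraphrase for illustration, not SPDK's exact code):

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below
# Read one field (e.g. HugePages_Surp) from a node's meminfo,
# falling back to the global /proc/meminfo when no node is given.
get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
                mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <n> " prefix
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
}

get_meminfo HugePages_Surp 0   # prints 0 on the node traced above
```

Because the value is echoed without its unit, callers such as hugepages.sh can feed the result straight into arithmetic contexts like (( nodes_test[node] += 0 )).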
00:05:51.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:51.828 22:51:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:51.828 22:51:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.828 22:51:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.086 22:51:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:52.086 22:51:41 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.086 22:51:41 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.086 22:51:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:52.086 ************************************ 00:05:52.086 START TEST guess_driver 00:05:52.086 ************************************ 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:52.087 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:52.087 Looking for driver=uio_pci_generic 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:52.087 22:51:41 
setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.087 22:51:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.653 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:52.653 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:52.653 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:52.653 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:52.653 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:52.653 22:51:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.590 22:51:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:53.590 22:51:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:53.590 22:51:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:53.590 22:51:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.155 00:05:54.155 real 0m1.939s 00:05:54.155 user 0m0.488s 00:05:54.155 sys 0m1.474s 00:05:54.155 22:51:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.155 22:51:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:54.155 ************************************ 00:05:54.155 END TEST guess_driver 00:05:54.155 ************************************ 00:05:54.156 22:51:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:54.156 00:05:54.156 real 0m2.489s 00:05:54.156 user 0m0.774s 00:05:54.156 sys 0m1.753s 00:05:54.156 22:51:43 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.156 22:51:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:54.156 ************************************ 00:05:54.156 END TEST driver 00:05:54.156 ************************************ 00:05:54.156 22:51:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:54.156 22:51:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:54.156 22:51:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.156 22:51:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.156 22:51:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:54.156 ************************************ 00:05:54.156 START TEST devices 00:05:54.156 ************************************ 00:05:54.156 22:51:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:54.413 * Looking for test storage... 
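The guess_driver test that just finished is, in effect, its whole trace: vfio is chosen only if at least one IOMMU group exists or unsafe no-IOMMU mode is enabled; otherwise the script falls back to uio_pci_generic, using modprobe --show-depends to confirm the module resolves. A condensed sketch of that decision (illustrative, not the exact driver.sh helpers):

```bash
#!/usr/bin/env bash
shopt -s nullglob   # so an empty iommu_groups dir yields zero matches
# Decide which userspace I/O driver the setup scripts would bind.
pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
                unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
                echo vfio-pci   # a working IOMMU (or unsafe no-IOMMU mode) permits vfio
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
                echo uio_pci_generic   # module resolves, as on this 5.15 kernel
        else
                echo 'No valid driver found' >&2
                return 1
        fi
}

pick_driver   # prints uio_pci_generic in this job
```

On this VM the iommu_groups glob matched nothing and unsafe no-IOMMU mode was N, so the run settled on uio_pci_generic, matching the 'Looking for driver=uio_pci_generic' line above.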
00:05:54.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:54.413 22:51:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:54.413 22:51:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:54.413 22:51:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:54.413 22:51:43 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.670 22:51:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:54.670 22:51:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:54.670 22:51:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:54.670 22:51:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:54.670 22:51:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:54.671 22:51:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:54.671 22:51:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:54.671 22:51:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:54.930 No valid GPT data, bailing 00:05:54.930 22:51:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:54.930 22:51:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:54.930 22:51:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:54.930 22:51:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:54.930 22:51:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:54.930 22:51:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:54.930 22:51:44 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:54.930 22:51:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:54.930 22:51:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:54.930 22:51:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:54.930 22:51:44 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:54.930 22:51:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:54.930 22:51:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:54.930 22:51:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.930 22:51:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.930 22:51:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:54.930 ************************************ 00:05:54.930 START TEST nvme_mount 00:05:54.930 ************************************ 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:54.930 22:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:55.862 Creating new GPT entries in memory. 00:05:55.862 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:55.862 other utilities. 00:05:55.862 22:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:55.862 22:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.862 22:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:55.862 22:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:55.862 22:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:56.795 Creating new GPT entries in memory. 
00:05:56.795 The operation has completed successfully. 00:05:56.795 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:56.795 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.795 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 115849 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:57.054 22:51:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.314 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:57.314 22:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:58.250 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.250 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:58.509 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:58.509 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:58.509 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:58.509 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:58.509 22:51:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.768 22:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:58.768 22:51:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
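partition_drive, traced near the start of nvme_mount above (and again with two partitions in dm_mount below), reduces to: zap the GPT with sgdisk, then compute each partition's start/end sectors and create it under flock so concurrent readers cannot interleave. A rough standalone rendering of that flow (device path taken from this job; partprobe stands in for the scripts/sync_dev_uevents.sh helper):

```bash
#!/usr/bin/env bash
set -e
# Wipe the GPT, then create part_no equal partitions at consecutive
# sector ranges, mirroring the arithmetic seen in the trace.
disk=/dev/nvme0n1                     # the job's test disk
part_no=2                             # 1 for nvme_mount, 2 for dm_mount
size=$((1073741824 / 4096))           # per-partition size in sectors, as in setup/common.sh
sgdisk "$disk" --zap-all
part_start=0 part_end=0
for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock keeps other openers from racing the partitioner
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done
partprobe "$disk"   # hypothetical stand-in for sync_dev_uevents.sh (waits for uevents)
```

The arithmetic reproduces the ranges in the log: partition 1 spans 2048-264191 and partition 2 spans 264192-526335, each 262144 sectors long.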
00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.721 22:51:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:00.006 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:00.006 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:00.006 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:00.006 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.006 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:00.006 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.264 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:00.264 22:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:01.201 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:01.201 00:06:01.201 real 0m6.445s 00:06:01.201 user 0m0.663s 00:06:01.201 sys 0m3.781s 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.201 22:51:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:01.201 ************************************ 00:06:01.201 END TEST nvme_mount 00:06:01.201 ************************************ 00:06:01.201 22:51:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:01.201 22:51:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:01.201 22:51:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.201 22:51:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.201 22:51:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:01.460 ************************************ 00:06:01.460 START TEST dm_mount 00:06:01.460 
************************************ 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:01.460 22:51:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:02.395 Creating new GPT entries in memory. 00:06:02.395 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:02.395 other utilities. 00:06:02.395 22:51:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:02.395 22:51:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:02.395 22:51:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:02.395 22:51:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:02.395 22:51:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:03.330 Creating new GPT entries in memory. 00:06:03.330 The operation has completed successfully. 00:06:03.330 22:51:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:03.330 22:51:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:03.330 22:51:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:03.330 22:51:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:03.330 22:51:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:04.705 The operation has completed successfully. 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 116333 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.705 22:51:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:04.705 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:04.705 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:04.705 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:04.705 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.705 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:04.705 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.964 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:04.964 22:51:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:05.900 
22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.900 22:51:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:06.159 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:06.159 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:06.159 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:06.159 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.159 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:06.159 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.418 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:06.418 22:51:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:07.353 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:07.353 00:06:07.353 real 0m6.121s 00:06:07.353 user 0m0.488s 00:06:07.353 sys 0m2.459s 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.353 ************************************ 00:06:07.353 22:51:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:07.353 END TEST dm_mount 00:06:07.353 ************************************ 00:06:07.612 22:51:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:07.612 22:51:56 setup.sh.devices -- 
setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:07.612 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:07.612 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:07.612 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:07.612 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:07.612 22:51:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:07.612 00:06:07.612 real 0m13.345s 00:06:07.612 user 0m1.563s 00:06:07.612 sys 0m6.598s 00:06:07.613 22:51:56 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.613 22:51:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:07.613 ************************************ 00:06:07.613 END TEST devices 00:06:07.613 ************************************ 00:06:07.613 22:51:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:07.613 00:06:07.613 real 0m27.658s 00:06:07.613 user 0m6.477s 00:06:07.613 sys 0m16.358s 00:06:07.613 22:51:56 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.613 22:51:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:07.613 ************************************ 00:06:07.613 END TEST setup.sh 00:06:07.613 ************************************ 00:06:07.613 22:51:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.613 22:51:56 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:08.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:08.180 Hugepages 00:06:08.180 node hugesize free / total 00:06:08.180 node0 1048576kB 0 / 0 00:06:08.180 node0 2048kB 2048 / 2048 00:06:08.180 00:06:08.180 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:08.180 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:08.180 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:08.180 22:51:57 -- spdk/autotest.sh@130 -- # uname -s 00:06:08.180 22:51:57 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:08.180 22:51:57 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:08.180 22:51:57 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:08.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:09.681 22:51:59 -- common/autotest_common.sh@1532 -- # sleep 1 
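The nvme_namespace_revert trace that follows enumerates controllers with a small pipeline: gen_nvme.sh emits a JSON bdev config, and jq pulls the PCI address (traddr) out of each entry. The same pipeline works standalone (rootdir as used throughout this job):

```bash
#!/usr/bin/env bash
# Collect NVMe controller PCI addresses the way the autotest helpers do.
rootdir=/home/vagrant/spdk_repo/spdk   # assumption: this job's standard layout
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
((${#bdfs[@]} > 0)) || { echo 'No NVMe controllers found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"             # 0000:00:10.0 on this VM
```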
00:06:11.107 22:52:00 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:11.108 22:52:00 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:11.108 22:52:00 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:11.108 22:52:00 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:11.108 22:52:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:11.108 22:52:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:11.108 22:52:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:11.108 22:52:00 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.108 22:52:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:11.108 22:52:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:11.108 22:52:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:06:11.108 22:52:00 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:11.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:11.108 Waiting for block devices as requested 00:06:11.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.365 22:52:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:11.365 22:52:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:11.365 22:52:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:11.365 22:52:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:11.365 22:52:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:11.365 22:52:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:06:11.366 22:52:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:11.366 22:52:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:11.366 22:52:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:11.366 22:52:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:11.366 22:52:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:11.366 22:52:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:11.366 22:52:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:11.366 22:52:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:11.366 22:52:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:11.366 22:52:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:11.366 22:52:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:11.366 22:52:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:11.366 22:52:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:11.366 22:52:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:11.366 22:52:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:11.366 22:52:00 -- common/autotest_common.sh@1557 -- # continue 00:06:11.366 22:52:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:11.366 22:52:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.366 22:52:00 -- common/autotest_common.sh@10 -- # set +x 00:06:11.366 22:52:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:11.366 22:52:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.366 22:52:00 -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.366 22:52:00 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:11.881 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.816 22:52:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:12.816 22:52:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.816 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:12.816 22:52:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:12.816 22:52:02 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:12.816 22:52:02 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:12.816 22:52:02 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:12.816 22:52:02 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:12.816 22:52:02 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:12.816 22:52:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:13.076 22:52:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:13.076 22:52:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:13.076 22:52:02 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:13.076 22:52:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:13.076 22:52:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:13.076 22:52:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:06:13.076 22:52:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:13.076 22:52:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:13.076 22:52:02 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:13.076 22:52:02 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:13.076 22:52:02 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:13.076 22:52:02 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:13.076 22:52:02 -- common/autotest_common.sh@1593 -- # return 0 00:06:13.076 22:52:02 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:06:13.076 22:52:02 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.076 22:52:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.076 22:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.076 22:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.076 ************************************ 00:06:13.076 START TEST unittest 00:06:13.076 ************************************ 00:06:13.076 22:52:02 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.076 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.076 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.076 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:13.076 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.076 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
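The trace above shows unittest.sh locating itself and the repository root with the usual dirname/readlink idiom, so the test resolves the same paths regardless of the caller's working directory. Restated as a standalone sketch:

testdir=$(readlink -f "$(dirname "$0")")   # .../spdk/test/unit
rootdir=$(readlink -f "$testdir/../..")    # two levels up: .../spdk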
00:06:13.076 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:13.076 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:13.076 ++ rpc_py=rpc_cmd 00:06:13.076 ++ set -e 00:06:13.076 ++ shopt -s nullglob 00:06:13.076 ++ shopt -s extglob 00:06:13.076 ++ shopt -s inherit_errexit 00:06:13.076 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:13.076 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:13.076 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:13.076 +++ CONFIG_WPDK_DIR= 00:06:13.076 +++ CONFIG_ASAN=y 00:06:13.076 +++ CONFIG_VBDEV_COMPRESS=n 00:06:13.076 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:13.076 +++ CONFIG_USDT=n 00:06:13.076 +++ CONFIG_CUSTOMOCF=n 00:06:13.076 +++ CONFIG_PREFIX=/usr/local 00:06:13.076 +++ CONFIG_RBD=n 00:06:13.076 +++ CONFIG_LIBDIR= 00:06:13.076 +++ CONFIG_IDXD=y 00:06:13.076 +++ CONFIG_NVME_CUSE=y 00:06:13.076 +++ CONFIG_SMA=n 00:06:13.076 +++ CONFIG_VTUNE=n 00:06:13.076 +++ CONFIG_TSAN=n 00:06:13.076 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:13.076 +++ CONFIG_VFIO_USER_DIR= 00:06:13.076 +++ CONFIG_PGO_CAPTURE=n 00:06:13.076 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:13.076 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:13.076 +++ CONFIG_LTO=n 00:06:13.076 +++ CONFIG_ISCSI_INITIATOR=y 00:06:13.076 +++ CONFIG_CET=n 00:06:13.076 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:13.076 +++ CONFIG_OCF_PATH= 00:06:13.076 +++ CONFIG_RDMA_SET_TOS=y 00:06:13.076 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:13.076 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:13.076 +++ CONFIG_UBLK=n 00:06:13.076 +++ CONFIG_ISAL_CRYPTO=y 00:06:13.076 +++ CONFIG_OPENSSL_PATH= 00:06:13.076 +++ CONFIG_OCF=n 00:06:13.076 +++ CONFIG_FUSE=n 00:06:13.076 +++ CONFIG_VTUNE_DIR= 00:06:13.076 +++ CONFIG_FUZZER_LIB= 00:06:13.076 +++ CONFIG_FUZZER=n 00:06:13.076 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:13.076 +++ CONFIG_CRYPTO=n 00:06:13.076 +++ CONFIG_PGO_USE=n 00:06:13.076 +++ CONFIG_VHOST=y 00:06:13.076 +++ CONFIG_DAOS=n 00:06:13.076 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:13.076 +++ CONFIG_DAOS_DIR= 00:06:13.076 +++ CONFIG_UNIT_TESTS=y 00:06:13.076 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:13.076 +++ CONFIG_VIRTIO=y 00:06:13.076 +++ CONFIG_DPDK_UADK=n 00:06:13.076 +++ CONFIG_COVERAGE=y 00:06:13.076 +++ CONFIG_RDMA=y 00:06:13.076 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:13.076 +++ CONFIG_URING_PATH= 00:06:13.076 +++ CONFIG_XNVME=n 00:06:13.076 +++ CONFIG_VFIO_USER=n 00:06:13.076 +++ CONFIG_ARCH=native 00:06:13.076 +++ CONFIG_HAVE_EVP_MAC=y 00:06:13.076 +++ CONFIG_URING_ZNS=n 00:06:13.076 +++ CONFIG_WERROR=y 00:06:13.076 +++ CONFIG_HAVE_LIBBSD=n 00:06:13.076 +++ CONFIG_UBSAN=y 00:06:13.076 +++ CONFIG_IPSEC_MB_DIR= 00:06:13.076 +++ CONFIG_GOLANG=n 00:06:13.076 +++ CONFIG_ISAL=y 00:06:13.076 +++ CONFIG_IDXD_KERNEL=n 00:06:13.076 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.076 +++ CONFIG_RDMA_PROV=verbs 00:06:13.076 +++ CONFIG_APPS=y 00:06:13.076 +++ CONFIG_SHARED=n 00:06:13.076 +++ CONFIG_HAVE_KEYUTILS=y 00:06:13.076 +++ CONFIG_FC_PATH= 00:06:13.076 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:13.076 +++ CONFIG_FC=n 00:06:13.076 +++ CONFIG_AVAHI=n 00:06:13.076 +++ CONFIG_FIO_PLUGIN=y 00:06:13.076 +++ CONFIG_RAID5F=y 00:06:13.076 +++ CONFIG_EXAMPLES=y 00:06:13.076 +++ CONFIG_TESTS=y 00:06:13.076 +++ CONFIG_CRYPTO_MLX5=n 00:06:13.076 +++ CONFIG_MAX_LCORES=128 00:06:13.076 +++ CONFIG_IPSEC_MB=n 00:06:13.076 +++ CONFIG_PGO_DIR= 00:06:13.076 +++ CONFIG_DEBUG=y 00:06:13.076 +++ 
CONFIG_DPDK_COMPRESSDEV=n 00:06:13.076 +++ CONFIG_CROSS_PREFIX= 00:06:13.076 +++ CONFIG_URING=n 00:06:13.076 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:13.076 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:13.076 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:13.076 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:13.076 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:13.076 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:13.076 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:13.076 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:13.076 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:13.076 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:13.076 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:13.076 +++ VHOST_APP=("$_app_dir/vhost") 00:06:13.076 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:13.076 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:13.076 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:13.076 +++ [[ #ifndef SPDK_CONFIG_H 00:06:13.076 #define SPDK_CONFIG_H 00:06:13.076 #define SPDK_CONFIG_APPS 1 00:06:13.076 #define SPDK_CONFIG_ARCH native 00:06:13.076 #define SPDK_CONFIG_ASAN 1 00:06:13.076 #undef SPDK_CONFIG_AVAHI 00:06:13.076 #undef SPDK_CONFIG_CET 00:06:13.076 #define SPDK_CONFIG_COVERAGE 1 00:06:13.076 #define SPDK_CONFIG_CROSS_PREFIX 00:06:13.076 #undef SPDK_CONFIG_CRYPTO 00:06:13.076 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:13.076 #undef SPDK_CONFIG_CUSTOMOCF 00:06:13.076 #undef SPDK_CONFIG_DAOS 00:06:13.076 #define SPDK_CONFIG_DAOS_DIR 00:06:13.076 #define SPDK_CONFIG_DEBUG 1 00:06:13.076 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:13.076 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:13.076 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:13.076 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.076 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:13.076 #undef SPDK_CONFIG_DPDK_UADK 00:06:13.076 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:13.076 #define SPDK_CONFIG_EXAMPLES 1 00:06:13.076 #undef SPDK_CONFIG_FC 00:06:13.076 #define SPDK_CONFIG_FC_PATH 00:06:13.076 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:13.076 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:13.076 #undef SPDK_CONFIG_FUSE 00:06:13.076 #undef SPDK_CONFIG_FUZZER 00:06:13.076 #define SPDK_CONFIG_FUZZER_LIB 00:06:13.076 #undef SPDK_CONFIG_GOLANG 00:06:13.076 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:13.076 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:13.076 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:13.076 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:13.076 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:13.076 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:13.076 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:13.076 #define SPDK_CONFIG_IDXD 1 00:06:13.076 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:13.076 #undef SPDK_CONFIG_IPSEC_MB 00:06:13.076 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:13.077 #define SPDK_CONFIG_ISAL 1 00:06:13.077 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:13.077 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:13.077 #define SPDK_CONFIG_LIBDIR 00:06:13.077 #undef SPDK_CONFIG_LTO 00:06:13.077 #define SPDK_CONFIG_MAX_LCORES 128 00:06:13.077 #define SPDK_CONFIG_NVME_CUSE 1 00:06:13.077 #undef SPDK_CONFIG_OCF 00:06:13.077 #define SPDK_CONFIG_OCF_PATH 00:06:13.077 #define SPDK_CONFIG_OPENSSL_PATH 00:06:13.077 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:13.077 #define SPDK_CONFIG_PGO_DIR 
00:06:13.077 #undef SPDK_CONFIG_PGO_USE 00:06:13.077 #define SPDK_CONFIG_PREFIX /usr/local 00:06:13.077 #define SPDK_CONFIG_RAID5F 1 00:06:13.077 #undef SPDK_CONFIG_RBD 00:06:13.077 #define SPDK_CONFIG_RDMA 1 00:06:13.077 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:13.077 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:13.077 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:13.077 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:13.077 #undef SPDK_CONFIG_SHARED 00:06:13.077 #undef SPDK_CONFIG_SMA 00:06:13.077 #define SPDK_CONFIG_TESTS 1 00:06:13.077 #undef SPDK_CONFIG_TSAN 00:06:13.077 #undef SPDK_CONFIG_UBLK 00:06:13.077 #define SPDK_CONFIG_UBSAN 1 00:06:13.077 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:13.077 #undef SPDK_CONFIG_URING 00:06:13.077 #define SPDK_CONFIG_URING_PATH 00:06:13.077 #undef SPDK_CONFIG_URING_ZNS 00:06:13.077 #undef SPDK_CONFIG_USDT 00:06:13.077 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:13.077 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:13.077 #undef SPDK_CONFIG_VFIO_USER 00:06:13.077 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:13.077 #define SPDK_CONFIG_VHOST 1 00:06:13.077 #define SPDK_CONFIG_VIRTIO 1 00:06:13.077 #undef SPDK_CONFIG_VTUNE 00:06:13.077 #define SPDK_CONFIG_VTUNE_DIR 00:06:13.077 #define SPDK_CONFIG_WERROR 1 00:06:13.077 #define SPDK_CONFIG_WPDK_DIR 00:06:13.077 #undef SPDK_CONFIG_XNVME 00:06:13.077 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:13.077 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:13.077 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.077 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:13.077 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.077 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.077 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.077 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.077 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.077 ++++ export PATH 00:06:13.077 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.077 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:13.077 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:13.077 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:13.077 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:13.077 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:13.077 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:13.077 +++ TEST_TAG=N/A 
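The heavily escaped glob match a few entries above is autotest_common.sh checking that the generated include/spdk/config.h defines SPDK_CONFIG_DEBUG before enabling debug-only behavior. A more readable equivalent (a hypothetical sketch, not the harness's actual code):

config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if grep -q '^#define SPDK_CONFIG_DEBUG 1' "$config_h"; then
  echo "debug build detected"              # same intent as the glob test
fi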
00:06:13.077 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:13.077 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:13.077 ++++ uname -s 00:06:13.077 +++ PM_OS=Linux 00:06:13.077 +++ MONITOR_RESOURCES_SUDO=() 00:06:13.077 +++ declare -A MONITOR_RESOURCES_SUDO 00:06:13.077 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:13.077 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:13.077 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:13.077 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:13.077 +++ SUDO[0]= 00:06:13.077 +++ SUDO[1]='sudo -E' 00:06:13.077 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:13.077 +++ [[ Linux == FreeBSD ]] 00:06:13.077 +++ [[ Linux == Linux ]] 00:06:13.077 +++ [[ QEMU != QEMU ]] 00:06:13.077 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:13.077 ++ : 1 00:06:13.077 ++ export RUN_NIGHTLY 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_RUN_VALGRIND 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_TEST_UNITTEST 00:06:13.077 ++ : 00:06:13.077 ++ export SPDK_TEST_AUTOBUILD 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_RELEASE_BUILD 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_ISAL 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_ISCSI 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_TEST_NVME 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVME_PMR 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVME_BP 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVME_CLI 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVME_CUSE 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVME_FDP 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVMF 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_VFIOUSER 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_FUZZER 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_FUZZER_SHORT 00:06:13.077 ++ : rdma 00:06:13.077 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_RBD 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_VHOST 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_TEST_BLOCKDEV 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_IOAT 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_BLOBFS 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_VHOST_INIT 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_LVOL 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_RUN_ASAN 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_RUN_UBSAN 00:06:13.077 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:13.077 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_RUN_NON_ROOT 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_CRYPTO 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_FTL 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_OCF 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_VMD 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_OPAL 00:06:13.077 ++ : v22.11.4 00:06:13.077 ++ export SPDK_TEST_NATIVE_DPDK 00:06:13.077 ++ : true 00:06:13.077 ++ export SPDK_AUTOTEST_X 00:06:13.077 ++ : 1 00:06:13.077 ++ export SPDK_TEST_RAID5 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_URING 
00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_USDT 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_USE_IGB_UIO 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_SCHEDULER 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_SCANBUILD 00:06:13.077 ++ : 00:06:13.077 ++ export SPDK_TEST_NVMF_NICS 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_SMA 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_DAOS 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_XNVME 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_ACCEL_DSA 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_ACCEL_IAA 00:06:13.077 ++ : 00:06:13.077 ++ export SPDK_TEST_FUZZER_TARGET 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_TEST_NVMF_MDNS 00:06:13.077 ++ : 0 00:06:13.077 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:13.077 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:13.077 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:13.077 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.077 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.077 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.077 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.077 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.077 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.077 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:13.077 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:13.077 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:13.077 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:13.077 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:13.077 ++ PYTHONDONTWRITEBYTECODE=1 00:06:13.077 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:13.077 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:13.077 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:13.077 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:13.077 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:13.077 ++ rm -rf /var/tmp/asan_suppression_file 00:06:13.077 ++ cat 00:06:13.077 ++ echo leak:libfuse3.so 00:06:13.077 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:13.077 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:13.077 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:13.078 ++ 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:13.078 ++ '[' -z /var/spdk/dependencies ']' 00:06:13.078 ++ export DEPENDENCY_DIR 00:06:13.078 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:13.078 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:13.078 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:13.078 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:13.078 ++ export QEMU_BIN= 00:06:13.078 ++ QEMU_BIN= 00:06:13.078 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:13.078 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:13.078 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:13.078 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:13.078 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:13.078 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:13.078 ++ '[' 0 -eq 0 ']' 00:06:13.078 ++ export valgrind= 00:06:13.078 ++ valgrind= 00:06:13.078 +++ uname -s 00:06:13.078 ++ '[' Linux = Linux ']' 00:06:13.078 ++ HUGEMEM=4096 00:06:13.078 ++ export CLEAR_HUGE=yes 00:06:13.078 ++ CLEAR_HUGE=yes 00:06:13.078 ++ [[ 0 -eq 1 ]] 00:06:13.078 ++ [[ 0 -eq 1 ]] 00:06:13.078 ++ MAKE=make 00:06:13.078 +++ nproc 00:06:13.078 ++ MAKEFLAGS=-j10 00:06:13.078 ++ export HUGEMEM=4096 00:06:13.078 ++ HUGEMEM=4096 00:06:13.078 ++ NO_HUGE=() 00:06:13.078 ++ TEST_MODE= 00:06:13.078 ++ [[ -z '' ]] 00:06:13.078 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:13.078 ++ exec 00:06:13.078 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:13.078 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:13.078 ++ set_test_storage 2147483648 00:06:13.078 ++ [[ -v testdir ]] 00:06:13.078 ++ local requested_size=2147483648 00:06:13.078 ++ local mount target_dir 00:06:13.078 ++ local -A mounts fss sizes avails uses 00:06:13.078 ++ local source fs size avail mount use 00:06:13.078 ++ local storage_fallback storage_candidates 00:06:13.078 +++ mktemp -udt spdk.XXXXXX 00:06:13.078 ++ storage_fallback=/tmp/spdk.eW4XjS 00:06:13.078 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:13.078 ++ [[ -n '' ]] 00:06:13.078 ++ [[ -n '' ]] 00:06:13.078 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.eW4XjS/tests/unit /tmp/spdk.eW4XjS 00:06:13.078 ++ requested_size=2214592512 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 +++ df -T 00:06:13.078 +++ grep -v Filesystem 00:06:13.078 ++ mounts["$mount"]=tmpfs 00:06:13.078 ++ fss["$mount"]=tmpfs 00:06:13.078 ++ avails["$mount"]=1252610048 00:06:13.078 ++ sizes["$mount"]=1253683200 00:06:13.078 ++ uses["$mount"]=1073152 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ mounts["$mount"]=/dev/vda1 00:06:13.078 ++ fss["$mount"]=ext4 00:06:13.078 ++ avails["$mount"]=9342935040 00:06:13.078 ++ sizes["$mount"]=20616794112 00:06:13.078 ++ uses["$mount"]=11257081856 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ mounts["$mount"]=tmpfs 00:06:13.078 ++ fss["$mount"]=tmpfs 00:06:13.078 ++ avails["$mount"]=6268403712 00:06:13.078 ++ sizes["$mount"]=6268403712 00:06:13.078 ++ uses["$mount"]=0 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ mounts["$mount"]=tmpfs 00:06:13.078 ++ fss["$mount"]=tmpfs 00:06:13.078 ++ 
avails["$mount"]=5242880 00:06:13.078 ++ sizes["$mount"]=5242880 00:06:13.078 ++ uses["$mount"]=0 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ mounts["$mount"]=/dev/vda15 00:06:13.078 ++ fss["$mount"]=vfat 00:06:13.078 ++ avails["$mount"]=103061504 00:06:13.078 ++ sizes["$mount"]=109395968 00:06:13.078 ++ uses["$mount"]=6334464 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ mounts["$mount"]=tmpfs 00:06:13.078 ++ fss["$mount"]=tmpfs 00:06:13.078 ++ avails["$mount"]=1253675008 00:06:13.078 ++ sizes["$mount"]=1253679104 00:06:13.078 ++ uses["$mount"]=4096 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:06:13.078 ++ fss["$mount"]=fuse.sshfs 00:06:13.078 ++ avails["$mount"]=95039692800 00:06:13.078 ++ sizes["$mount"]=105088212992 00:06:13.078 ++ uses["$mount"]=4663087104 00:06:13.078 ++ read -r source fs size use avail _ mount 00:06:13.078 ++ printf '* Looking for test storage...\n' 00:06:13.078 * Looking for test storage... 00:06:13.078 ++ local target_space new_size 00:06:13.078 ++ for target_dir in "${storage_candidates[@]}" 00:06:13.078 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.078 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:13.078 ++ mount=/ 00:06:13.078 ++ target_space=9342935040 00:06:13.078 ++ (( target_space == 0 || target_space < requested_size )) 00:06:13.078 ++ (( target_space >= requested_size )) 00:06:13.078 ++ [[ ext4 == tmpfs ]] 00:06:13.078 ++ [[ ext4 == ramfs ]] 00:06:13.078 ++ [[ / == / ]] 00:06:13.078 ++ new_size=13471674368 00:06:13.078 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:13.078 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:13.078 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:13.078 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.078 ++ return 0 00:06:13.078 ++ set -o errtrace 00:06:13.078 ++ shopt -s extdebug 00:06:13.078 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:13.078 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@1687 -- # true 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@29 -- # exec 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:13.078 22:52:02 unittest -- common/autotest_common.sh@18 -- # set -x 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@181 -- # hash lcov 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:06:13.078 --rc lcov_branch_coverage=1 00:06:13.078 --rc lcov_function_coverage=1 00:06:13.078 --rc genhtml_branch_coverage=1 00:06:13.078 --rc genhtml_function_coverage=1 00:06:13.078 --rc genhtml_legend=1 00:06:13.078 --rc geninfo_all_blocks=1 00:06:13.078 ' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:06:13.078 --rc lcov_branch_coverage=1 00:06:13.078 --rc lcov_function_coverage=1 00:06:13.078 --rc genhtml_branch_coverage=1 00:06:13.078 --rc genhtml_function_coverage=1 00:06:13.078 --rc genhtml_legend=1 00:06:13.078 --rc geninfo_all_blocks=1 00:06:13.078 ' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:06:13.078 --rc lcov_branch_coverage=1 00:06:13.078 --rc lcov_function_coverage=1 00:06:13.078 --rc genhtml_branch_coverage=1 00:06:13.078 --rc genhtml_function_coverage=1 00:06:13.078 --rc genhtml_legend=1 00:06:13.078 --rc geninfo_all_blocks=1 00:06:13.078 --no-external' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:06:13.078 --rc lcov_branch_coverage=1 00:06:13.078 --rc lcov_function_coverage=1 00:06:13.078 --rc genhtml_branch_coverage=1 00:06:13.078 --rc genhtml_function_coverage=1 00:06:13.078 --rc genhtml_legend=1 00:06:13.078 --rc geninfo_all_blocks=1 00:06:13.078 --no-external' 00:06:13.078 22:52:02 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:18.345 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:18.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:05.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:05.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:05.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:05.045 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:05.045 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:05.046 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:05.046 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:05.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:05.047 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:05.047 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:05.047 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:05.047 22:52:49 unittest -- unit/unittest.sh@208 -- # uname -m 00:07:05.047 22:52:49 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:07:05.047 22:52:49 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:05.047 22:52:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.047 22:52:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.047 22:52:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:05.047 ************************************ 00:07:05.047 START TEST unittest_pci_event 00:07:05.047 ************************************ 00:07:05.047 22:52:49 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:05.047 00:07:05.047 00:07:05.047 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.047 http://cunit.sourceforge.net/ 00:07:05.047 00:07:05.047 00:07:05.047 Suite: pci_event 00:07:05.047 Test: test_pci_parse_event ...[2024-07-13 22:52:50.009501] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:07:05.047 [2024-07-13 22:52:50.010139] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:07:05.047 passed 00:07:05.047 00:07:05.047 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.047 suites 1 1 n/a 0 0 00:07:05.047 tests 1 1 1 0 0 00:07:05.047 asserts 15 15 15 0 n/a 00:07:05.047 00:07:05.047 Elapsed time = 0.001 seconds 00:07:05.047 00:07:05.047 real 0m0.036s 00:07:05.047 user 0m0.023s 00:07:05.047 sys 0m0.011s 00:07:05.047 22:52:50 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.047 22:52:50 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:07:05.047 ************************************ 00:07:05.047 END TEST unittest_pci_event 
00:07:05.047 ************************************ 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:05.047 22:52:50 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:05.047 ************************************ 00:07:05.047 START TEST unittest_include 00:07:05.047 ************************************ 00:07:05.047 22:52:50 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:05.047 00:07:05.047 00:07:05.047 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.047 http://cunit.sourceforge.net/ 00:07:05.047 00:07:05.047 00:07:05.047 Suite: histogram 00:07:05.047 Test: histogram_test ...passed 00:07:05.047 Test: histogram_merge ...passed 00:07:05.047 00:07:05.047 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.047 suites 1 1 n/a 0 0 00:07:05.047 tests 2 2 2 0 0 00:07:05.047 asserts 50 50 50 0 n/a 00:07:05.047 00:07:05.047 Elapsed time = 0.006 seconds 00:07:05.047 00:07:05.047 real 0m0.041s 00:07:05.047 user 0m0.024s 00:07:05.047 sys 0m0.015s 00:07:05.047 22:52:50 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.047 22:52:50 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:07:05.047 ************************************ 00:07:05.047 END TEST unittest_include 00:07:05.047 ************************************ 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:05.047 22:52:50 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.047 22:52:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:05.047 ************************************ 00:07:05.047 START TEST unittest_bdev 00:07:05.047 ************************************ 00:07:05.047 22:52:50 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:07:05.047 22:52:50 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:07:05.047 00:07:05.047 00:07:05.047 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.047 http://cunit.sourceforge.net/ 00:07:05.047 00:07:05.047 00:07:05.047 Suite: bdev 00:07:05.047 Test: bytes_to_blocks_test ...passed 00:07:05.047 Test: num_blocks_test ...passed 00:07:05.047 Test: io_valid_test ...passed 00:07:05.047 Test: open_write_test ...[2024-07-13 22:52:50.270449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:07:05.047 [2024-07-13 22:52:50.270776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:07:05.047 [2024-07-13 22:52:50.270915] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:07:05.047 passed 00:07:05.047 Test: claim_test ...passed 00:07:05.047 Test: 
alias_add_del_test ...[2024-07-13 22:52:50.368136] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:07:05.047 [2024-07-13 22:52:50.368313] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:07:05.047 [2024-07-13 22:52:50.368372] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:07:05.047 passed 00:07:05.047 Test: get_device_stat_test ...passed 00:07:05.047 Test: bdev_io_types_test ...passed 00:07:05.047 Test: bdev_io_wait_test ...passed 00:07:05.047 Test: bdev_io_spans_split_test ...passed 00:07:05.047 Test: bdev_io_boundary_split_test ...passed 00:07:05.048 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-13 22:52:50.536221] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:07:05.048 passed 00:07:05.048 Test: bdev_io_mix_split_test ...passed 00:07:05.048 Test: bdev_io_split_with_io_wait ...passed 00:07:05.048 Test: bdev_io_write_unit_split_test ...[2024-07-13 22:52:50.648602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:05.048 [2024-07-13 22:52:50.648716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:05.048 [2024-07-13 22:52:50.648760] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:07:05.048 [2024-07-13 22:52:50.648801] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:07:05.048 passed 00:07:05.048 Test: bdev_io_alignment_with_boundary ...passed 00:07:05.048 Test: bdev_io_alignment ...passed 00:07:05.048 Test: bdev_histograms ...passed 00:07:05.048 Test: bdev_write_zeroes ...passed 00:07:05.048 Test: bdev_compare_and_write ...passed 00:07:05.048 Test: bdev_compare ...passed 00:07:05.048 Test: bdev_compare_emulated ...passed 00:07:05.048 Test: bdev_zcopy_write ...passed 00:07:05.048 Test: bdev_zcopy_read ...passed 00:07:05.048 Test: bdev_open_while_hotremove ...passed 00:07:05.048 Test: bdev_close_while_hotremove ...passed 00:07:05.048 Test: bdev_open_ext_test ...[2024-07-13 22:52:51.055363] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:05.048 passed 00:07:05.048 Test: bdev_open_ext_unregister ...[2024-07-13 22:52:51.055569] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:05.048 passed 00:07:05.048 Test: bdev_set_io_timeout ...passed 00:07:05.048 Test: bdev_set_qd_sampling ...passed 00:07:05.048 Test: lba_range_overlap ...passed 00:07:05.048 Test: lock_lba_range_check_ranges ...passed 00:07:05.048 Test: lock_lba_range_with_io_outstanding ...passed 00:07:05.048 Test: lock_lba_range_overlapped ...passed 00:07:05.048 Test: bdev_quiesce ...[2024-07-13 22:52:51.250697] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
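The write_unit_split failures above state the splitting contract directly: a write must cover a whole multiple of the bdev's write_unit_size (31 or 1 blocks against a unit of 32 is rejected, as is 32 against a unit of 64), because a write unit cannot be torn into smaller child I/Os. A minimal sketch of that check follows; the function name and shape are invented for illustration, not taken from the SPDK source.

    #include <errno.h>
    #include <stdint.h>

    /* Hypothetical mirror of the rejection logged by bdev_io_do_submit:
     * writes that are not a whole number of write units cannot be split
     * into valid children and are refused outright. */
    static int
    check_write_unit_size(uint64_t num_blocks, uint32_t write_unit_size)
    {
        /* e.g. "IO num_blocks 31 does not match the write_unit_size 32" */
        if (write_unit_size == 0 || num_blocks % write_unit_size != 0) {
            return -EINVAL;
        }
        return 0;
    }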
00:07:05.048 passed 00:07:05.048 Test: bdev_io_abort ...passed 00:07:05.048 Test: bdev_unmap ...passed 00:07:05.048 Test: bdev_write_zeroes_split_test ...passed 00:07:05.048 Test: bdev_set_options_test ...passed 00:07:05.048 Test: bdev_get_memory_domains ...passed 00:07:05.048 Test: bdev_io_ext ...[2024-07-13 22:52:51.405146] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:07:05.048 passed 00:07:05.048 Test: bdev_io_ext_no_opts ...passed 00:07:05.048 Test: bdev_io_ext_invalid_opts ...passed 00:07:05.048 Test: bdev_io_ext_split ...passed 00:07:05.048 Test: bdev_io_ext_bounce_buffer ...passed 00:07:05.048 Test: bdev_register_uuid_alias ...[2024-07-13 22:52:51.648898] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 347b1dd7-ff49-448f-9eb0-cdd12781af31 already exists 00:07:05.048 [2024-07-13 22:52:51.649021] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:347b1dd7-ff49-448f-9eb0-cdd12781af31 alias for bdev bdev0 00:07:05.048 passed 00:07:05.048 Test: bdev_unregister_by_name ...[2024-07-13 22:52:51.672342] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:07:05.048 passed 00:07:05.048 Test: for_each_bdev_test ...[2024-07-13 22:52:51.672447] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7982:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:07:05.048 passed 00:07:05.048 Test: bdev_seek_test ...passed 00:07:05.048 Test: bdev_copy ...passed 00:07:05.048 Test: bdev_copy_split_test ...passed 00:07:05.048 Test: examine_locks ...passed 00:07:05.048 Test: claim_v2_rwo ...[2024-07-13 22:52:51.814579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.814686] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.814719] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.814796] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.814818] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:05.048 passed 00:07:05.048 Test: claim_v2_rom ...[2024-07-13 22:52:51.814874] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:07:05.048 [2024-07-13 22:52:51.815045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815124] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815151] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815180] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815220] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:07:05.048 [2024-07-13 22:52:51.815284] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:05.048 passed 00:07:05.048 Test: claim_v2_rwm ...[2024-07-13 22:52:51.815425] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:05.048 [2024-07-13 22:52:51.815494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815540] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815572] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815593] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815630] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.815681] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:05.048 passed 00:07:05.048 Test: claim_v2_existing_writer ...passed 00:07:05.048 Test: claim_v2_existing_v1 ...[2024-07-13 22:52:51.815835] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:05.048 [2024-07-13 22:52:51.815874] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:05.048 [2024-07-13 22:52:51.816023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:05.048 passed 00:07:05.048 Test: claim_v1_existing_v2 ...[2024-07-13 22:52:51.816074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.816096] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.816240] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:05.048 [2024-07-13 22:52:51.816303] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:05.048 [2024-07-13 
22:52:51.816348] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:05.048 passed 00:07:05.048 Test: examine_claimed ...[2024-07-13 22:52:51.816756] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:07:05.048 passed 00:07:05.048 00:07:05.048 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.048 suites 1 1 n/a 0 0 00:07:05.048 tests 59 59 59 0 0 00:07:05.048 asserts 4599 4599 4599 0 n/a 00:07:05.048 00:07:05.048 Elapsed time = 1.617 seconds 00:07:05.048 22:52:51 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:07:05.048 00:07:05.048 00:07:05.048 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.048 http://cunit.sourceforge.net/ 00:07:05.048 00:07:05.048 00:07:05.048 Suite: nvme 00:07:05.048 Test: test_create_ctrlr ...passed 00:07:05.048 Test: test_reset_ctrlr ...[2024-07-13 22:52:51.872863] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.048 passed 00:07:05.048 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:07:05.048 Test: test_failover_ctrlr ...passed 00:07:05.048 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-13 22:52:51.875602] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.048 [2024-07-13 22:52:51.875829] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.048 [2024-07-13 22:52:51.876049] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.048 passed 00:07:05.048 Test: test_pending_reset ...[2024-07-13 22:52:51.877573] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.048 [2024-07-13 22:52:51.877895] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.048 passed 00:07:05.048 Test: test_attach_ctrlr ...[2024-07-13 22:52:51.879012] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:07:05.048 passed 00:07:05.048 Test: test_aer_cb ...passed 00:07:05.048 Test: test_submit_nvme_cmd ...passed 00:07:05.048 Test: test_add_remove_trid ...passed 00:07:05.048 Test: test_abort ...[2024-07-13 22:52:51.882560] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:07:05.048 passed 00:07:05.048 Test: test_get_io_qpair ...passed 00:07:05.048 Test: test_bdev_unregister ...passed 00:07:05.048 Test: test_compare_ns ...passed 00:07:05.048 Test: test_init_ana_log_page ...passed 00:07:05.048 Test: test_get_memory_domains ...passed 00:07:05.048 Test: test_reconnect_qpair ...[2024-07-13 22:52:51.885182] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
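Every binary in this run prints the same scaffolding: the CUnit version banner, one "Suite:" header per suite, a "Test: ... passed" line per case, and the closing Run Summary / asserts table (here 59 tests and 4599 asserts for bdev_ut). That shape comes from CUnit's basic interface; a minimal, self-contained driver that produces it looks like the sketch below, with an illustrative suite and test name rather than anything from the SPDK tree.

    #include <CUnit/Basic.h>

    /* Illustrative test body: any failing CU_ASSERT marks the test failed. */
    static void example_test(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        /* One suite per "Suite:" banner in the log output. */
        CU_pSuite suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "example_test", example_test) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* emits the per-test lines and Run Summary */
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }

A nonzero exit status is what the run_test wrapper in the surrounding shell trace presumably keys off when deciding whether a sub-test passed.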
00:07:05.048 passed 00:07:05.048 Test: test_create_bdev_ctrlr ...[2024-07-13 22:52:51.885822] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:07:05.049 passed 00:07:05.049 Test: test_add_multi_ns_to_bdev ...[2024-07-13 22:52:51.887228] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:07:05.049 passed 00:07:05.049 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:07:05.049 Test: test_admin_path ...passed 00:07:05.049 Test: test_reset_bdev_ctrlr ...passed 00:07:05.049 Test: test_find_io_path ...passed 00:07:05.049 Test: test_retry_io_if_ana_state_is_updating ...passed 00:07:05.049 Test: test_retry_io_for_io_path_error ...passed 00:07:05.049 Test: test_retry_io_count ...passed 00:07:05.049 Test: test_concurrent_read_ana_log_page ...passed 00:07:05.049 Test: test_retry_io_for_ana_error ...passed 00:07:05.049 Test: test_check_io_error_resiliency_params ...[2024-07-13 22:52:51.894788] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:07:05.049 [2024-07-13 22:52:51.894899] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:05.049 [2024-07-13 22:52:51.894932] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:05.049 [2024-07-13 22:52:51.894968] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:07:05.049 [2024-07-13 22:52:51.895000] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:05.049 [2024-07-13 22:52:51.895038] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:05.049 [2024-07-13 22:52:51.895074] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:07:05.049 [2024-07-13 22:52:51.895139] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:07:05.049 passed 00:07:05.049 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-13 22:52:51.895182] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:07:05.049 passed 00:07:05.049 Test: test_reconnect_ctrlr ...[2024-07-13 22:52:51.896028] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.896160] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
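The test_check_io_error_resiliency_params block above exercises every rejection path of one validator, and the rules can be read straight off the *ERROR* strings. A rough reconstruction of that checking logic follows; the rules are transcribed from the logged messages, but the function name, types, and exact branch structure are guesses rather than SPDK's actual code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical mirror of bdev_nvme_check_io_error_resiliency_params;
     * each rejection corresponds to an *ERROR* string in the log above. */
    static bool
    check_io_error_resiliency_params(int32_t ctrlr_loss_timeout_sec,
                                     uint32_t reconnect_delay_sec,
                                     uint32_t fast_io_fail_timeout_sec)
    {
        if (ctrlr_loss_timeout_sec < -1) {
            return false;  /* "ctrlr_loss_timeout_sec can't be less than -1." */
        }
        if (ctrlr_loss_timeout_sec == 0) {
            /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
             * if ctrlr_loss_timeout_sec is 0." */
            return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
            return false;  /* "reconnect_delay_sec can't be 0 if
                            * ctrlr_loss_timeout_sec is not 0." */
        }
        if (ctrlr_loss_timeout_sec > 0) {
            if (reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false;  /* "reconnect_delay_sec can't be more than
                                * ctrlr_loss_timeout_sec." */
            }
            if (fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false;  /* "fast_io_fail_timeout_sec can't be more than
                                * ctrlr_loss_timeout_sec." */
            }
        }
        if (fast_io_fail_timeout_sec != 0 &&
            reconnect_delay_sec > fast_io_fail_timeout_sec) {
            return false;  /* "reconnect_delay_sec can't be more than
                            * fast_io_fail_timeout_sec." */
        }
        return true;
    }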
00:07:05.049 [2024-07-13 22:52:51.896439] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.896569] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.896703] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 passed 00:07:05.049 Test: test_retry_failover_ctrlr ...[2024-07-13 22:52:51.897061] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 passed 00:07:05.049 Test: test_fail_path ...[2024-07-13 22:52:51.897685] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.897881] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.898022] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.898129] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.898252] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 passed 00:07:05.049 Test: test_nvme_ns_cmp ...passed 00:07:05.049 Test: test_ana_transition ...passed 00:07:05.049 Test: test_set_preferred_path ...passed 00:07:05.049 Test: test_find_next_io_path ...passed 00:07:05.049 Test: test_find_io_path_min_qd ...passed 00:07:05.049 Test: test_disable_auto_failback ...[2024-07-13 22:52:51.899980] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 passed 00:07:05.049 Test: test_set_multipath_policy ...passed 00:07:05.049 Test: test_uuid_generation ...passed 00:07:05.049 Test: test_retry_io_to_same_path ...passed 00:07:05.049 Test: test_race_between_reset_and_disconnected ...passed 00:07:05.049 Test: test_ctrlr_op_rpc ...passed 00:07:05.049 Test: test_bdev_ctrlr_op_rpc ...passed 00:07:05.049 Test: test_disable_enable_ctrlr ...[2024-07-13 22:52:51.903898] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:05.049 [2024-07-13 22:52:51.904073] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:05.049 passed 00:07:05.049 Test: test_delete_ctrlr_done ...passed 00:07:05.049 Test: test_ns_remove_during_reset ...passed 00:07:05.049 Test: test_io_path_is_current ...passed 00:07:05.049 00:07:05.049 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.049 suites 1 1 n/a 0 0 00:07:05.049 tests 49 49 49 0 0 00:07:05.049 asserts 3577 3577 3577 0 n/a 00:07:05.049 00:07:05.049 Elapsed time = 0.033 seconds 00:07:05.049 22:52:51 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:07:05.049 00:07:05.049 00:07:05.049 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.049 http://cunit.sourceforge.net/ 00:07:05.049 00:07:05.049 Test Options 00:07:05.049 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:07:05.049 00:07:05.049 Suite: raid 00:07:05.049 Test: test_create_raid ...passed 00:07:05.049 Test: test_create_raid_superblock ...passed 00:07:05.049 Test: test_delete_raid ...passed 00:07:05.049 Test: test_create_raid_invalid_args ...[2024-07-13 22:52:51.951744] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:05.049 [2024-07-13 22:52:51.952283] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:05.049 [2024-07-13 22:52:51.953072] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:05.049 [2024-07-13 22:52:51.953368] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:05.049 [2024-07-13 22:52:51.953512] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:07:05.049 [2024-07-13 22:52:51.954552] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:05.049 [2024-07-13 22:52:51.954614] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:07:05.049 passed 00:07:05.049 Test: test_delete_raid_invalid_args ...passed 00:07:05.049 Test: test_io_channel ...passed 00:07:05.049 Test: test_reset_io ...passed 00:07:05.049 Test: test_multi_raid ...passed 00:07:05.049 Test: test_io_type_supported ...passed 00:07:05.049 Test: test_raid_json_dump_info ...passed 00:07:05.049 Test: test_context_size ...passed 00:07:05.049 Test: test_raid_level_conversions ...passed 00:07:05.049 Test: test_raid_io_split ...passed 00:07:05.049 Test: test_raid_process ...passed 00:07:05.049 00:07:05.049 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.049 suites 1 1 n/a 0 0 00:07:05.049 tests 14 14 14 0 0 00:07:05.049 asserts 6183 6183 6183 0 n/a 00:07:05.049 00:07:05.049 Elapsed time = 0.024 seconds 00:07:05.049 22:52:51 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:05.049 00:07:05.049 00:07:05.049 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.049 http://cunit.sourceforge.net/ 00:07:05.049 00:07:05.049 00:07:05.049 Suite: raid_sb 00:07:05.049 Test: test_raid_bdev_write_superblock ...passed 00:07:05.049 Test: test_raid_bdev_load_base_bdev_superblock 
...passed 00:07:05.049 Test: test_raid_bdev_parse_superblock ...[2024-07-13 22:52:52.011624] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:05.049 passed 00:07:05.049 Suite: raid_sb_md 00:07:05.049 Test: test_raid_bdev_write_superblock ...passed 00:07:05.049 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:05.049 Test: test_raid_bdev_parse_superblock ...passed 00:07:05.049 Suite: raid_sb_md_interleaved 00:07:05.049 Test: test_raid_bdev_write_superblock ...[2024-07-13 22:52:52.012085] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:05.049 passed 00:07:05.049 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:05.049 Test: test_raid_bdev_parse_superblock ...[2024-07-13 22:52:52.012359] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:05.049 passed 00:07:05.049 00:07:05.049 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.049 suites 3 3 n/a 0 0 00:07:05.049 tests 9 9 9 0 0 00:07:05.049 asserts 139 139 139 0 n/a 00:07:05.049 00:07:05.049 Elapsed time = 0.001 seconds 00:07:05.049 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:05.049 00:07:05.049 00:07:05.049 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.049 http://cunit.sourceforge.net/ 00:07:05.049 00:07:05.049 00:07:05.049 Suite: concat 00:07:05.049 Test: test_concat_start ...passed 00:07:05.049 Test: test_concat_rw ...passed 00:07:05.049 Test: test_concat_null_payload ...passed 00:07:05.049 00:07:05.049 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.049 suites 1 1 n/a 0 0 00:07:05.049 tests 3 3 3 0 0 00:07:05.049 asserts 8460 8460 8460 0 n/a 00:07:05.049 00:07:05.049 Elapsed time = 0.008 seconds 00:07:05.049 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:07:05.049 00:07:05.049 00:07:05.049 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.049 http://cunit.sourceforge.net/ 00:07:05.049 00:07:05.049 00:07:05.049 Suite: raid0 00:07:05.049 Test: test_write_io ...passed 00:07:05.049 Test: test_read_io ...passed 00:07:05.050 Test: test_unmap_io ...passed 00:07:05.050 Test: test_io_failure ...passed 00:07:05.050 Suite: raid0_dif 00:07:05.050 Test: test_write_io ...passed 00:07:05.050 Test: test_read_io ...passed 00:07:05.050 Test: test_unmap_io ...passed 00:07:05.050 Test: test_io_failure ...passed 00:07:05.050 00:07:05.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.050 suites 2 2 n/a 0 0 00:07:05.050 tests 8 8 8 0 0 00:07:05.050 asserts 368291 368291 368291 0 n/a 00:07:05.050 00:07:05.050 Elapsed time = 0.140 seconds 00:07:05.050 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:05.050 00:07:05.050 00:07:05.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.050 http://cunit.sourceforge.net/ 00:07:05.050 00:07:05.050 00:07:05.050 Suite: raid1 00:07:05.050 Test: test_raid1_start ...passed 00:07:05.050 Test: test_raid1_read_balancing ...passed 00:07:05.050 Test: test_raid1_write_error ...passed 00:07:05.050 Test: 
test_raid1_read_error ...passed 00:07:05.050 00:07:05.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.050 suites 1 1 n/a 0 0 00:07:05.050 tests 4 4 4 0 0 00:07:05.050 asserts 4374 4374 4374 0 n/a 00:07:05.050 00:07:05.050 Elapsed time = 0.006 seconds 00:07:05.050 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:05.050 00:07:05.050 00:07:05.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.050 http://cunit.sourceforge.net/ 00:07:05.050 00:07:05.050 00:07:05.050 Suite: zone 00:07:05.050 Test: test_zone_get_operation ...passed 00:07:05.050 Test: test_bdev_zone_get_info ...passed 00:07:05.050 Test: test_bdev_zone_management ...passed 00:07:05.050 Test: test_bdev_zone_append ...passed 00:07:05.050 Test: test_bdev_zone_append_with_md ...passed 00:07:05.050 Test: test_bdev_zone_appendv ...passed 00:07:05.050 Test: test_bdev_zone_appendv_with_md ...passed 00:07:05.050 Test: test_bdev_io_get_append_location ...passed 00:07:05.050 00:07:05.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.050 suites 1 1 n/a 0 0 00:07:05.050 tests 8 8 8 0 0 00:07:05.050 asserts 94 94 94 0 n/a 00:07:05.050 00:07:05.050 Elapsed time = 0.000 seconds 00:07:05.050 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:05.050 00:07:05.050 00:07:05.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.050 http://cunit.sourceforge.net/ 00:07:05.050 00:07:05.050 00:07:05.050 Suite: gpt_parse 00:07:05.050 Test: test_parse_mbr_and_primary ...[2024-07-13 22:52:52.344618] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:05.050 [2024-07-13 22:52:52.345242] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:05.050 [2024-07-13 22:52:52.345327] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:05.050 [2024-07-13 22:52:52.345445] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:05.050 [2024-07-13 22:52:52.345566] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:05.050 [2024-07-13 22:52:52.345876] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:05.050 passed 00:07:05.050 Test: test_parse_secondary ...[2024-07-13 22:52:52.346840] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:05.050 [2024-07-13 22:52:52.346917] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:05.050 [2024-07-13 22:52:52.346974] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:05.050 [2024-07-13 22:52:52.347028] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:05.050 passed 00:07:05.050 Test: test_check_mbr ...[2024-07-13 22:52:52.348073] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:05.050 
[2024-07-13 22:52:52.348139] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:05.050 passed 00:07:05.050 Test: test_read_header ...[2024-07-13 22:52:52.348208] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:05.050 [2024-07-13 22:52:52.348320] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:05.050 [2024-07-13 22:52:52.348700] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:05.050 [2024-07-13 22:52:52.348766] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:05.050 [2024-07-13 22:52:52.348812] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:05.050 [2024-07-13 22:52:52.348859] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:05.050 passed 00:07:05.050 Test: test_read_partitions ...[2024-07-13 22:52:52.349144] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:05.050 [2024-07-13 22:52:52.349572] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:05.050 [2024-07-13 22:52:52.349629] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:05.050 [2024-07-13 22:52:52.349668] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:05.050 [2024-07-13 22:52:52.350358] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:07:05.050 passed 00:07:05.050 00:07:05.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.050 suites 1 1 n/a 0 0 00:07:05.050 tests 5 5 5 0 0 00:07:05.050 asserts 33 33 33 0 n/a 00:07:05.050 00:07:05.050 Elapsed time = 0.007 seconds 00:07:05.050 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:05.050 00:07:05.050 00:07:05.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.050 http://cunit.sourceforge.net/ 00:07:05.050 00:07:05.050 00:07:05.050 Suite: bdev_part 00:07:05.050 Test: part_test ...[2024-07-13 22:52:52.387602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name eac5f519-b66f-52a1-9744-8491e4dd730b already exists 00:07:05.050 [2024-07-13 22:52:52.387894] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:eac5f519-b66f-52a1-9744-8491e4dd730b alias for bdev test1 00:07:05.050 passed 00:07:05.050 Test: part_free_test ...passed 00:07:05.050 Test: part_get_io_channel_test ...passed 00:07:05.050 Test: part_construct_ext ...passed 00:07:05.050 00:07:05.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.050 suites 1 1 n/a 0 0 00:07:05.050 tests 4 4 4 0 0 00:07:05.050 asserts 48 48 48 0 n/a 00:07:05.050 00:07:05.050 Elapsed time = 0.051 seconds 00:07:05.050 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:05.050 00:07:05.050 00:07:05.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.050 http://cunit.sourceforge.net/ 00:07:05.050 00:07:05.050 00:07:05.050 Suite: scsi_nvme_suite 00:07:05.050 Test: scsi_nvme_translate_test ...passed 00:07:05.050 00:07:05.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.050 suites 1 1 n/a 0 0 00:07:05.050 tests 1 1 1 0 0 00:07:05.050 asserts 104 104 104 0 n/a 00:07:05.050 00:07:05.050 Elapsed time = 0.000 seconds 00:07:05.050 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:05.050 00:07:05.050 00:07:05.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.050 http://cunit.sourceforge.net/ 00:07:05.050 00:07:05.050 00:07:05.051 Suite: lvol 00:07:05.051 Test: ut_lvs_init ...[2024-07-13 22:52:52.509277] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:05.051 [2024-07-13 22:52:52.509843] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:05.051 passed 00:07:05.051 Test: ut_lvol_init ...passed 00:07:05.051 Test: ut_lvol_snapshot ...passed 00:07:05.051 Test: ut_lvol_clone ...passed 00:07:05.051 Test: ut_lvs_destroy ...passed 00:07:05.051 Test: ut_lvs_unload ...passed 00:07:05.051 Test: ut_lvol_resize ...[2024-07-13 22:52:52.512046] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:05.051 passed 00:07:05.051 Test: ut_lvol_set_read_only ...passed 00:07:05.051 Test: ut_lvol_hotremove ...passed 00:07:05.051 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:05.051 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:05.051 Test: ut_lvol_read_write ...passed 00:07:05.051 Test: ut_vbdev_lvol_submit_request ...passed 00:07:05.051 Test: ut_lvol_examine_config ...passed 00:07:05.051 Test: ut_lvol_examine_disk ...[2024-07-13 22:52:52.513032] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:05.051 passed 00:07:05.051 Test: ut_lvol_rename ...[2024-07-13 22:52:52.514517] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:05.051 [2024-07-13 22:52:52.514708] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:05.051 passed 00:07:05.051 Test: ut_bdev_finish ...passed 00:07:05.051 Test: ut_lvs_rename ...passed 00:07:05.051 Test: ut_lvol_seek ...passed 00:07:05.051 Test: ut_esnap_dev_create ...[2024-07-13 22:52:52.515808] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:05.051 [2024-07-13 22:52:52.515952] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:05.051 passed 00:07:05.051 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-13 22:52:52.516018] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:05.051 [2024-07-13 22:52:52.516226] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:05.051 [2024-07-13 22:52:52.516297] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:07:05.051 passed 00:07:05.051 Test: ut_lvol_shallow_copy ...[2024-07-13 22:52:52.516968] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:05.051 [2024-07-13 22:52:52.517084] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:07:05.051 passed 00:07:05.051 Test: ut_lvol_set_external_parent ...[2024-07-13 22:52:52.517264] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:07:05.051 passed 00:07:05.051 00:07:05.051 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.051 suites 1 1 n/a 0 0 00:07:05.051 tests 23 23 23 0 0 00:07:05.051 asserts 770 770 770 0 n/a 00:07:05.051 00:07:05.051 Elapsed time = 0.008 seconds 00:07:05.051 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:05.051 00:07:05.051 00:07:05.051 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.051 http://cunit.sourceforge.net/ 00:07:05.051 00:07:05.051 00:07:05.051 Suite: zone_block 00:07:05.051 Test: test_zone_block_create ...passed 00:07:05.051 Test: test_zone_block_create_invalid ...[2024-07-13 22:52:52.574900] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:05.051 [2024-07-13 22:52:52.575230] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 22:52:52.575419] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:05.051 [2024-07-13 22:52:52.575494] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 22:52:52.575670] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:05.051 [2024-07-13 22:52:52.575716] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-13 22:52:52.575817] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:05.051 [2024-07-13 22:52:52.575876] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:07:05.051 Test: test_get_zone_info ...[2024-07-13 22:52:52.576369] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
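test_zone_block_create_invalid above rejects four configurations in turn: a base bdev that is already claimed, a base bdev that is already zoned, a zero zone capacity, and zero optimal open zones, with the RPC layer mapping the first two to "File exists" and the last two to "Invalid argument". As a sketch, the create path presumably front-loads checks of the following shape; the struct and names are invented for illustration.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Invented config struct mirroring the rejections logged by
     * vbdev_zone_block_create and zone_block_register above. */
    struct zone_block_cfg {
        uint64_t zone_capacity;       /* writable blocks per zone */
        uint64_t optimal_open_zones;
        bool     base_bdev_is_zoned;  /* base must not already be zoned */
        bool     base_bdev_claimed;   /* base must not be claimed elsewhere */
    };

    static int
    zone_block_create_check(const struct zone_block_cfg *cfg)
    {
        if (cfg->zone_capacity == 0) {
            return -EINVAL;  /* "Zone capacity can't be 0" -> Invalid argument */
        }
        if (cfg->optimal_open_zones == 0) {
            return -EINVAL;  /* "Optimal open zones can't be 0" -> Invalid argument */
        }
        if (cfg->base_bdev_is_zoned || cfg->base_bdev_claimed) {
            return -EEXIST;  /* "already a zoned bdev" / "already claimed"
                              * -> File exists */
        }
        return 0;
    }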
00:07:05.051 [2024-07-13 22:52:52.576466] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.576528] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 passed 00:07:05.051 Test: test_supported_io_types ...passed 00:07:05.051 Test: test_reset_zone ...[2024-07-13 22:52:52.577412] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.577489] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 passed 00:07:05.051 Test: test_open_zone ...[2024-07-13 22:52:52.577972] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.578699] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.578772] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 passed 00:07:05.051 Test: test_zone_write ...[2024-07-13 22:52:52.579290] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:05.051 [2024-07-13 22:52:52.579364] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.579434] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:05.051 [2024-07-13 22:52:52.579486] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.584924] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:05.051 [2024-07-13 22:52:52.584992] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.585082] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:05.051 [2024-07-13 22:52:52.585122] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.590650] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:05.051 [2024-07-13 22:52:52.590726] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:05.051 passed 00:07:05.051 Test: test_zone_read ...[2024-07-13 22:52:52.591195] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:05.051 [2024-07-13 22:52:52.591259] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.591341] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:05.051 [2024-07-13 22:52:52.591385] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.591871] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:05.051 [2024-07-13 22:52:52.591935] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 passed 00:07:05.051 Test: test_close_zone ...[2024-07-13 22:52:52.592286] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.592381] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.592629] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.592697] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 passed 00:07:05.051 Test: test_finish_zone ...[2024-07-13 22:52:52.593329] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.593417] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 passed 00:07:05.051 Test: test_append_zone ...[2024-07-13 22:52:52.593811] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:05.051 [2024-07-13 22:52:52.593870] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.593959] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:05.051 [2024-07-13 22:52:52.593996] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:05.051 [2024-07-13 22:52:52.606646] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:05.051 [2024-07-13 22:52:52.606729] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
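Taken together, the zone write and read failures above encode the zoned-block contract: the target LBA must fall inside a real zone (0x5000 is out of range), the zone must be in a writable state (the "invalid state 2" rejection), a plain write must land exactly on the write pointer (lba 0x407 vs wp 0x405), and neither writes, reads, nor appends may run past the zone capacity (lba 0x3f0 plus len 0x20 overruns). A condensed sketch of that validation follows, under a deliberately simplified model: a single writable flag stands in for the full zone state machine, and all names are illustrative.

    #include <errno.h>
    #include <stdint.h>

    /* Simplified zone model; the real vbdev tracks full zone states,
     * not just a writable flag. */
    struct zone {
        uint64_t start_lba;  /* first LBA of the zone */
        uint64_t capacity;   /* writable blocks in the zone */
        uint64_t wp;         /* write pointer: next LBA a write must target */
        int      writable;   /* 0 once the zone is full or closed */
    };

    static int
    zone_check_write(const struct zone *z, uint64_t lba, uint64_t len,
                     uint64_t num_zones, uint64_t zone_size)
    {
        if (lba >= num_zones * zone_size) {
            return -EINVAL;  /* "Trying to write to invalid zone (lba 0x5000)" */
        }
        if (!z->writable) {
            return -EINVAL;  /* "Trying to write to zone in invalid state 2" */
        }
        if (lba != z->wp) {
            return -EINVAL;  /* "invalid address (lba 0x407, wp 0x405)" */
        }
        if (lba + len > z->start_lba + z->capacity) {
            return -EINVAL;  /* "Write exceeds zone capacity" */
        }
        return 0;            /* on success the caller advances z->wp by len */
    }

An append would behave like a write implicitly targeted at the current wp, which is why it trips the same capacity check in test_append_zone above.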
00:07:05.051 passed 00:07:05.051 00:07:05.051 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.051 suites 1 1 n/a 0 0 00:07:05.051 tests 11 11 11 0 0 00:07:05.051 asserts 3437 3437 3437 0 n/a 00:07:05.051 00:07:05.051 Elapsed time = 0.033 seconds 00:07:05.051 22:52:52 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:05.051 00:07:05.051 00:07:05.051 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.051 http://cunit.sourceforge.net/ 00:07:05.051 00:07:05.051 00:07:05.051 Suite: bdev 00:07:05.051 Test: basic ...[2024-07-13 22:52:52.700340] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55daf18bb7c1): Operation not permitted (rc=-1) 00:07:05.052 [2024-07-13 22:52:52.700620] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55daf18bb780): Operation not permitted (rc=-1) 00:07:05.052 [2024-07-13 22:52:52.700661] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55daf18bb7c1): Operation not permitted (rc=-1) 00:07:05.052 passed 00:07:05.052 Test: unregister_and_close ...passed 00:07:05.052 Test: unregister_and_close_different_threads ...passed 00:07:05.052 Test: basic_qos ...passed 00:07:05.052 Test: put_channel_during_reset ...passed 00:07:05.052 Test: aborted_reset ...passed 00:07:05.052 Test: aborted_reset_no_outstanding_io ...passed 00:07:05.052 Test: io_during_reset ...passed 00:07:05.052 Test: reset_completions ...passed 00:07:05.052 Test: io_during_qos_queue ...passed 00:07:05.052 Test: io_during_qos_reset ...passed 00:07:05.052 Test: enomem ...passed 00:07:05.052 Test: enomem_multi_bdev ...passed 00:07:05.052 Test: enomem_multi_bdev_unregister ...passed 00:07:05.052 Test: enomem_multi_io_target ...passed 00:07:05.052 Test: qos_dynamic_enable ...passed 00:07:05.052 Test: bdev_histograms_mt ...passed 00:07:05.052 Test: bdev_set_io_timeout_mt ...[2024-07-13 22:52:53.496440] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:07:05.052 passed 00:07:05.052 Test: lock_lba_range_then_submit_io ...[2024-07-13 22:52:53.521770] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x55daf18bb740 already registered (old:0x6130000003c0 new:0x613000000c80) 00:07:05.052 passed 00:07:05.052 Test: unregister_during_reset ...passed 00:07:05.052 Test: event_notify_and_close ...passed 00:07:05.052 Test: unregister_and_qos_poller ...passed 00:07:05.052 Suite: bdev_wrong_thread 00:07:05.052 Test: spdk_bdev_register_wt ...[2024-07-13 22:52:53.702649] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:07:05.052 passed 00:07:05.052 Test: spdk_bdev_examine_wt ...[2024-07-13 22:52:53.703425] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:07:05.052 passed 00:07:05.052 00:07:05.052 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.052 suites 2 2 n/a 0 0 00:07:05.052 tests 24 24 24 0 0 00:07:05.052 asserts 621 621 621 0 n/a 00:07:05.052 00:07:05.052 Elapsed time = 1.012 seconds 00:07:05.052 00:07:05.052 real 0m3.565s 00:07:05.052 user 0m1.714s 00:07:05.052 sys 0m1.836s 00:07:05.052 22:52:53 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.052 22:52:53 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:05.052 ************************************ 00:07:05.052 END TEST unittest_bdev 00:07:05.052 ************************************ 00:07:05.052 22:52:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:05.052 22:52:53 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:05.052 22:52:53 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:05.052 22:52:53 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:05.052 22:52:53 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:05.052 22:52:53 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:05.052 22:52:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.052 22:52:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.052 22:52:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:05.052 ************************************ 00:07:05.052 START TEST unittest_bdev_raid5f 00:07:05.052 ************************************ 00:07:05.052 22:52:53 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:05.052 00:07:05.052 00:07:05.052 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.052 http://cunit.sourceforge.net/ 00:07:05.052 00:07:05.052 00:07:05.052 Suite: raid5f 00:07:05.052 Test: test_raid5f_start ...passed 00:07:05.311 Test: test_raid5f_submit_read_request ...passed 00:07:05.311 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:10.584 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:37.121 Test: test_raid5f_chunk_write_error ...passed 00:07:47.132 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:49.687 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:36.355 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:36.355 00:08:36.355 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.355 suites 1 1 n/a 0 0 00:08:36.355 tests 8 8 8 0 0 00:08:36.355 asserts 518158 518158 518158 0 n/a 00:08:36.355 00:08:36.355 Elapsed time = 84.225 seconds 00:08:36.355 00:08:36.355 real 1m24.368s 00:08:36.355 user 1m19.657s 00:08:36.355 sys 0m4.644s 00:08:36.355 22:54:18 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.355 22:54:18 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:08:36.355 ************************************ 00:08:36.355 END TEST unittest_bdev_raid5f 00:08:36.355 ************************************ 00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:36.355 22:54:18 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:36.355 ************************************ 00:08:36.355 START TEST unittest_blob_blobfs 
00:08:36.355 22:54:18 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob
00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:36.355 22:54:18 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:36.355 ************************************
00:08:36.355 START TEST unittest_blob_blobfs
00:08:36.355 ************************************
00:08:36.355 22:54:18 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob
00:08:36.355 22:54:18 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:08:36.355 22:54:18 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:08:36.355
00:08:36.355
00:08:36.355 CUnit - A unit testing framework for C - Version 2.1-3
00:08:36.355 http://cunit.sourceforge.net/
00:08:36.355
00:08:36.355
00:08:36.355 Suite: blob_nocopy_noextent
00:08:36.355 Test: blob_init ...[2024-07-13 22:54:18.250673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:08:36.355 passed
00:08:36.355 Test: blob_thin_provision ...passed
00:08:36.355 Test: blob_read_only ...passed
00:08:36.355 Test: bs_load ...[2024-07-13 22:54:18.348404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:08:36.355 passed
00:08:36.355 Test: bs_load_custom_cluster_size ...passed
00:08:36.355 Test: bs_load_after_failed_grow ...passed
00:08:36.355 Test: bs_cluster_sz ...[2024-07-13 22:54:18.381108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:08:36.355 [2024-07-13 22:54:18.381504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:08:36.355 [2024-07-13 22:54:18.381674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:08:36.355 passed
00:08:36.355 Test: bs_resize_md ...passed
00:08:36.355 Test: bs_destroy ...passed
00:08:36.355 Test: bs_type ...passed
00:08:36.355 Test: bs_super_block ...passed
00:08:36.355 Test: bs_test_recover_cluster_count ...passed
00:08:36.355 Test: bs_grow_live ...passed
00:08:36.355 Test: bs_grow_live_no_space ...passed
00:08:36.355 Test: bs_test_grow ...passed
00:08:36.355 Test: blob_serialize_test ...passed
00:08:36.355 Test: super_block_crc ...passed
00:08:36.355 Test: blob_thin_prov_write_count_io ...passed
00:08:36.355 Test: blob_thin_prov_unmap_cluster ...passed
00:08:36.355 Test: bs_load_iter_test ...passed
00:08:36.355 Test: blob_relations ...[2024-07-13 22:54:18.588623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.355 [2024-07-13 22:54:18.588763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.355 [2024-07-13 22:54:18.589795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.355 [2024-07-13 22:54:18.589895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.355 passed
00:08:36.355 Test: blob_relations2 ...[2024-07-13 22:54:18.605061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.355 [2024-07-13 22:54:18.605171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.605221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.356 [2024-07-13 22:54:18.605254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.606701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.356 [2024-07-13 22:54:18.606796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.607223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.356 [2024-07-13 22:54:18.607290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 passed
00:08:36.356 Test: blob_relations3 ...passed
00:08:36.356 Test: blobstore_clean_power_failure ...passed
00:08:36.356 Test: blob_delete_snapshot_power_failure ...[2024-07-13 22:54:18.771347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:08:36.356 [2024-07-13 22:54:18.784305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:36.356 [2024-07-13 22:54:18.784410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:36.356 [2024-07-13 22:54:18.784470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.798057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:08:36.356 [2024-07-13 22:54:18.798160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:36.356 [2024-07-13 22:54:18.798211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:36.356 [2024-07-13 22:54:18.798268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.811601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:08:36.356 [2024-07-13 22:54:18.811751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.824951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:08:36.356 [2024-07-13 22:54:18.825127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:18.838535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:08:36.356 [2024-07-13 22:54:18.838679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 passed
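The blob_init, bs_load and bs_cluster_sz failures above are deliberate negative-path probes of blobstore initialization: a backing dev with a 500-byte block length is rejected, zeroed options are refused, and a cluster size of 4095 is refused because it is smaller than the 4096-byte page. A minimal sketch of that init path, assuming the spdk_bs_opts_init()/spdk_bs_init() signatures of recent SPDK releases and a bs_dev obtained elsewhere:

#include <stdio.h>
#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
        if (bserrno != 0) {
                /* e.g. dev block length 500, or cluster_sz < page size */
                fprintf(stderr, "blobstore init failed: %d\n", bserrno);
                return;
        }
        /* bs is now usable, e.g. for spdk_bs_create_blob(). */
}

static void
init_blobstore(struct spdk_bs_dev *bs_dev) /* assumed created elsewhere */
{
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts, sizeof(opts));
        opts.cluster_sz = 4096; /* must be nonzero and >= the 4096-byte page */
        spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
}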
00:08:36.356 Test: blob_create_snapshot_power_failure ...[2024-07-13 22:54:18.879100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:36.356 [2024-07-13 22:54:18.906579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:08:36.356 [2024-07-13 22:54:18.920592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:08:36.356 passed
00:08:36.356 Test: blob_io_unit ...passed
00:08:36.356 Test: blob_io_unit_compatibility ...passed
00:08:36.356 Test: blob_ext_md_pages ...passed
00:08:36.356 Test: blob_esnap_io_4096_4096 ...passed
00:08:36.356 Test: blob_esnap_io_512_512 ...passed
00:08:36.356 Test: blob_esnap_io_4096_512 ...passed
00:08:36.356 Test: blob_esnap_io_512_4096 ...passed
00:08:36.356 Test: blob_esnap_clone_resize ...passed
00:08:36.356 Suite: blob_bs_nocopy_noextent
00:08:36.356 Test: blob_open ...passed
00:08:36.356 Test: blob_create ...[2024-07-13 22:54:19.225857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:08:36.356 passed
00:08:36.356 Test: blob_create_loop ...passed
00:08:36.356 Test: blob_create_fail ...[2024-07-13 22:54:19.328074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:36.356 passed
00:08:36.356 Test: blob_create_internal ...passed
00:08:36.356 Test: blob_create_zero_extent ...passed
00:08:36.356 Test: blob_snapshot ...passed
00:08:36.356 Test: blob_clone ...passed
00:08:36.356 Test: blob_inflate ...[2024-07-13 22:54:19.524450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:08:36.356 passed
00:08:36.356 Test: blob_delete ...passed
00:08:36.356 Test: blob_resize_test ...[2024-07-13 22:54:19.601101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:08:36.356 passed
00:08:36.356 Test: blob_resize_thin_test ...passed
00:08:36.356 Test: channel_ops ...passed
00:08:36.356 Test: blob_super ...passed
00:08:36.356 Test: blob_rw_verify_iov ...passed
00:08:36.356 Test: blob_unmap ...passed
00:08:36.356 Test: blob_iter ...passed
00:08:36.356 Test: blob_parse_md ...passed
00:08:36.356 Test: bs_load_pending_removal ...passed
00:08:36.356 Test: bs_unload ...[2024-07-13 22:54:19.936690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:08:36.356 passed
00:08:36.356 Test: bs_usable_clusters ...passed
00:08:36.356 Test: blob_crc ...[2024-07-13 22:54:20.011775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:36.356 [2024-07-13 22:54:20.011936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:36.356 passed
00:08:36.356 Test: blob_flags ...passed
00:08:36.356 Test: bs_version ...passed
00:08:36.356 Test: blob_set_xattrs_test ...[2024-07-13 22:54:20.129156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:36.356 [2024-07-13 22:54:20.129267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:36.356 passed
00:08:36.356 Test: blob_thin_prov_alloc ...passed
00:08:36.356 Test: blob_insert_cluster_msg_test ...passed
00:08:36.356 Test: blob_thin_prov_rw ...passed
00:08:36.356 Test: blob_thin_prov_rle ...passed
00:08:36.356 Test: blob_thin_prov_rw_iov ...passed
00:08:36.356 Test: blob_snapshot_rw ...passed
00:08:36.356 Test: blob_snapshot_rw_iov ...passed
00:08:36.356 Test: blob_inflate_rw ...passed
00:08:36.356 Test: blob_snapshot_freeze_io ...passed
00:08:36.356 Test: blob_operation_split_rw ...passed
00:08:36.356 Test: blob_operation_split_rw_iov ...passed
00:08:36.356 Test: blob_simultaneous_operations ...[2024-07-13 22:54:21.140804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.356 [2024-07-13 22:54:21.140993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.356 [2024-07-13 22:54:21.142244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.356 [2024-07-13 22:54:21.142331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:21.153853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.357 [2024-07-13 22:54:21.153941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:21.154082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.357 [2024-07-13 22:54:21.154122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 passed
00:08:36.357 Test: blob_persist_test ...passed
00:08:36.357 Test: blob_decouple_snapshot ...passed
00:08:36.357 Test: blob_seek_io_unit ...passed
00:08:36.357 Test: blob_nested_freezes ...passed
00:08:36.357 Test: blob_clone_resize ...passed
00:08:36.357 Test: blob_shallow_copy ...[2024-07-13 22:54:21.449796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:08:36.357 [2024-07-13 22:54:21.450157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:08:36.357 [2024-07-13 22:54:21.450434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:08:36.357 passed
00:08:36.357 Suite: blob_blob_nocopy_noextent
00:08:36.357 Test: blob_write ...passed
00:08:36.357 Test: blob_read ...passed
00:08:36.357 Test: blob_rw_verify ...passed
00:08:36.357 Test: blob_rw_verify_iov_nomem ...passed
00:08:36.357 Test: blob_rw_iov_read_only ...passed
00:08:36.357 Test: blob_xattr ...passed
00:08:36.357 Test: blob_dirty_shutdown ...passed
00:08:36.357 Test: blob_is_degraded ...passed
00:08:36.357 Suite: blob_esnap_bs_nocopy_noextent
00:08:36.357 Test: blob_esnap_create ...passed
00:08:36.357 Test: blob_esnap_thread_add_remove ...passed
00:08:36.357 Test: blob_esnap_clone_snapshot ...passed
00:08:36.357 Test: blob_esnap_clone_inflate ...passed
00:08:36.357 Test: blob_esnap_clone_decouple ...passed
00:08:36.357 Test: blob_esnap_clone_reload ...passed
00:08:36.357 Test: blob_esnap_hotplug ...passed
00:08:36.357 Test: blob_set_parent ...[2024-07-13 22:54:22.074855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:08:36.357 [2024-07-13 22:54:22.074972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:08:36.357 [2024-07-13 22:54:22.075143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:08:36.357 [2024-07-13 22:54:22.075206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:08:36.357 [2024-07-13 22:54:22.075793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:36.357 passed
00:08:36.357 Test: blob_set_external_parent ...[2024-07-13 22:54:22.113571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:08:36.357 [2024-07-13 22:54:22.113692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:08:36.357 [2024-07-13 22:54:22.113741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:08:36.357 [2024-07-13 22:54:22.114239] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:36.357 passed
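blob_set_parent and blob_set_external_parent walk the rejection paths of spdk_bs_blob_set_parent() and spdk_bs_blob_set_external_parent(), the two functions the errors above name: identical blob/snapshot ids, a parent that is not a snapshot, mismatched cluster counts, a non-thin-provisioned child, and an esnap device whose size is not a whole number of clusters (61440 / 16384 = 3.75, hence the rejection). A hedged sketch of the happy path; the exact signature is an assumption here and should be checked against include/spdk/blob.h:

#include "spdk/blob.h"

static void
set_parent_done(void *cb_arg, int bserrno)
{
        /* -EINVAL corresponds to the rejections listed in the log above. */
}

static void
reparent(struct spdk_blob_store *bs, spdk_blob_id child, spdk_blob_id snap)
{
        /* The child must be thin-provisioned and span the same number of
         * clusters as the snapshot it is being attached to. */
        spdk_bs_blob_set_parent(bs, child, snap, set_parent_done, NULL);
}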
00:08:36.357 Suite: blob_nocopy_extent
00:08:36.357 Test: blob_init ...[2024-07-13 22:54:22.127549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:08:36.357 passed
00:08:36.357 Test: blob_thin_provision ...passed
00:08:36.357 Test: blob_read_only ...passed
00:08:36.357 Test: bs_load ...[2024-07-13 22:54:22.182449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:08:36.357 passed
00:08:36.357 Test: bs_load_custom_cluster_size ...passed
00:08:36.357 Test: bs_load_after_failed_grow ...passed
00:08:36.357 Test: bs_cluster_sz ...[2024-07-13 22:54:22.211850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:08:36.357 [2024-07-13 22:54:22.212153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:08:36.357 [2024-07-13 22:54:22.212259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:08:36.357 passed
00:08:36.357 Test: bs_resize_md ...passed
00:08:36.357 Test: bs_destroy ...passed
00:08:36.357 Test: bs_type ...passed
00:08:36.357 Test: bs_super_block ...passed
00:08:36.357 Test: bs_test_recover_cluster_count ...passed
00:08:36.357 Test: bs_grow_live ...passed
00:08:36.357 Test: bs_grow_live_no_space ...passed
00:08:36.357 Test: bs_test_grow ...passed
00:08:36.357 Test: blob_serialize_test ...passed
00:08:36.357 Test: super_block_crc ...passed
00:08:36.357 Test: blob_thin_prov_write_count_io ...passed
00:08:36.357 Test: blob_thin_prov_unmap_cluster ...passed
00:08:36.357 Test: bs_load_iter_test ...passed
00:08:36.357 Test: blob_relations ...[2024-07-13 22:54:22.417199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.357 [2024-07-13 22:54:22.417391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:22.418460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.357 [2024-07-13 22:54:22.418519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 passed
00:08:36.357 Test: blob_relations2 ...[2024-07-13 22:54:22.435193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.357 [2024-07-13 22:54:22.435319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:22.435369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.357 [2024-07-13 22:54:22.435400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:22.437045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.357 [2024-07-13 22:54:22.437134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:22.437566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:36.357 [2024-07-13 22:54:22.437633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 passed
00:08:36.357 Test: blob_relations3 ...passed
00:08:36.357 Test: blobstore_clean_power_failure ...passed
00:08:36.357 Test: blob_delete_snapshot_power_failure ...[2024-07-13 22:54:22.618540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:36.357 [2024-07-13 22:54:22.633204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:36.357 [2024-07-13 22:54:22.647927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:36.357 [2024-07-13 22:54:22.648038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:36.357 [2024-07-13 22:54:22.648105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.357 [2024-07-13 22:54:22.662443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:36.357 [2024-07-13 22:54:22.662592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:36.357 [2024-07-13 22:54:22.662645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:36.358 [2024-07-13 22:54:22.662693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:22.677135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:36.358 [2024-07-13 22:54:22.677257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:36.358 [2024-07-13 22:54:22.677309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:36.358 [2024-07-13 22:54:22.677362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:22.691961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:08:36.358 [2024-07-13 22:54:22.692114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:22.706279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:08:36.358 [2024-07-13 22:54:22.706439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:22.721310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:08:36.358 [2024-07-13 22:54:22.721425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 passed
00:08:36.358 Test: blob_create_snapshot_power_failure ...[2024-07-13 22:54:22.765945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:36.358 [2024-07-13 22:54:22.780099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:36.358 [2024-07-13 22:54:22.807624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:36.358 [2024-07-13 22:54:22.822056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:08:36.358 passed
00:08:36.358 Test: blob_io_unit ...passed
00:08:36.358 Test: blob_io_unit_compatibility ...passed
00:08:36.358 Test: blob_ext_md_pages ...passed
00:08:36.358 Test: blob_esnap_io_4096_4096 ...passed
00:08:36.358 Test: blob_esnap_io_512_512 ...passed
00:08:36.358 Test: blob_esnap_io_4096_512 ...passed
00:08:36.358 Test: blob_esnap_io_512_4096 ...passed
00:08:36.358 Test: blob_esnap_clone_resize ...passed
00:08:36.358 Suite: blob_bs_nocopy_extent
00:08:36.358 Test: blob_open ...passed
00:08:36.358 Test: blob_create ...[2024-07-13 22:54:23.134574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:08:36.358 passed
00:08:36.358 Test: blob_create_loop ...passed
00:08:36.358 Test: blob_create_fail ...[2024-07-13 22:54:23.246316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:36.358 passed
00:08:36.358 Test: blob_create_internal ...passed
00:08:36.358 Test: blob_create_zero_extent ...passed
00:08:36.358 Test: blob_snapshot ...passed
00:08:36.358 Test: blob_clone ...passed
00:08:36.358 Test: blob_inflate ...[2024-07-13 22:54:23.444196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:08:36.358 passed
00:08:36.358 Test: blob_delete ...passed
00:08:36.358 Test: blob_resize_test ...[2024-07-13 22:54:23.515976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:08:36.358 passed
00:08:36.358 Test: blob_resize_thin_test ...passed
00:08:36.358 Test: channel_ops ...passed
00:08:36.358 Test: blob_super ...passed
00:08:36.358 Test: blob_rw_verify_iov ...passed
00:08:36.358 Test: blob_unmap ...passed
00:08:36.358 Test: blob_iter ...passed
00:08:36.358 Test: blob_parse_md ...passed
00:08:36.358 Test: bs_load_pending_removal ...passed
00:08:36.358 Test: bs_unload ...[2024-07-13 22:54:23.860440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:08:36.358 passed
00:08:36.358 Test: bs_usable_clusters ...passed
00:08:36.358 Test: blob_crc ...[2024-07-13 22:54:23.932528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:36.358 [2024-07-13 22:54:23.932685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:36.358 passed
00:08:36.358 Test: blob_flags ...passed
00:08:36.358 Test: bs_version ...passed
00:08:36.358 Test: blob_set_xattrs_test ...[2024-07-13 22:54:24.044807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:36.358 [2024-07-13 22:54:24.044953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:36.358 passed
00:08:36.358 Test: blob_thin_prov_alloc ...passed
00:08:36.358 Test: blob_insert_cluster_msg_test ...passed
00:08:36.358 Test: blob_thin_prov_rw ...passed
00:08:36.358 Test: blob_thin_prov_rle ...passed
00:08:36.358 Test: blob_thin_prov_rw_iov ...passed
00:08:36.358 Test: blob_snapshot_rw ...passed
00:08:36.358 Test: blob_snapshot_rw_iov ...passed
00:08:36.358 Test: blob_inflate_rw ...passed
00:08:36.358 Test: blob_snapshot_freeze_io ...passed
00:08:36.358 Test: blob_operation_split_rw ...passed
00:08:36.358 Test: blob_operation_split_rw_iov ...passed
00:08:36.358 Test: blob_simultaneous_operations ...[2024-07-13 22:54:25.030188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.358 [2024-07-13 22:54:25.030307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:25.031475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.358 [2024-07-13 22:54:25.031557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:25.042028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.358 [2024-07-13 22:54:25.042099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 [2024-07-13 22:54:25.042225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:36.358 [2024-07-13 22:54:25.042249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:36.358 passed
00:08:36.358 Test: blob_persist_test ...passed
00:08:36.358 Test: blob_decouple_snapshot ...passed
00:08:36.358 Test: blob_seek_io_unit ...passed
00:08:36.358 Test: blob_nested_freezes ...passed
00:08:36.358 Test: blob_clone_resize ...passed
00:08:36.358 Test: blob_shallow_copy ...[2024-07-13 22:54:25.338503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:08:36.358 [2024-07-13 22:54:25.338835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:08:36.358 [2024-07-13 22:54:25.339082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:08:36.358 passed
00:08:36.358 Suite: blob_blob_nocopy_extent
00:08:36.358 Test: blob_write ...passed
00:08:36.358 Test: blob_read ...passed
00:08:36.358 Test: blob_rw_verify ...passed
00:08:36.358 Test: blob_rw_verify_iov_nomem ...passed
00:08:36.358 Test: blob_rw_iov_read_only ...passed
00:08:36.358 Test: blob_xattr ...passed
00:08:36.358 Test: blob_dirty_shutdown ...passed
00:08:36.358 Test: blob_is_degraded ...passed
00:08:36.358 Suite: blob_esnap_bs_nocopy_extent
00:08:36.358 Test: blob_esnap_create ...passed
00:08:36.618 Test: blob_esnap_thread_add_remove ...passed
00:08:36.618 Test: blob_esnap_clone_snapshot ...passed
00:08:36.618 Test: blob_esnap_clone_inflate ...passed
00:08:36.618 Test: blob_esnap_clone_decouple ...passed
00:08:36.618 Test: blob_esnap_clone_reload ...passed
00:08:36.618 Test: blob_esnap_hotplug ...passed
00:08:36.618 Test: blob_set_parent ...[2024-07-13 22:54:25.951472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:08:36.618 [2024-07-13 22:54:25.951597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:08:36.618 [2024-07-13 22:54:25.951720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:08:36.618 [2024-07-13 22:54:25.951762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:08:36.618 [2024-07-13 22:54:25.952219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:36.618 passed
00:08:36.618 Test: blob_set_external_parent ...[2024-07-13 22:54:25.990531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:08:36.618 [2024-07-13 22:54:25.990641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:08:36.618 [2024-07-13 22:54:25.990685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:08:36.618 [2024-07-13 22:54:25.991098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:36.618 passed
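The recurring "Cannot remove snapshot with more than one clone" lines make the deletion rule explicit: bs_is_blob_deletable() lets a snapshot be removed only while at most one clone references it, and the power-failure variants then replay that deletion against injected metadata read errors (-5, i.e. -EIO). A sketch of the snapshot-then-delete flow these tests drive, using the public spdk_bs_create_snapshot()/spdk_bs_delete_blob() calls (callback chain abbreviated):

#include "spdk/blob.h"

static void
delete_done(void *cb_arg, int bserrno)
{
        /* bserrno surfaces failures such as the undeletable-snapshot case. */
}

static void
snapshot_done(void *cb_arg, spdk_blob_id snapshot_id, int bserrno)
{
        struct spdk_blob_store *bs = cb_arg;

        if (bserrno != 0) {
                return;
        }
        /* Succeeds only while the snapshot has at most one clone. */
        spdk_bs_delete_blob(bs, snapshot_id, delete_done, NULL);
}

static void
snapshot_and_delete(struct spdk_blob_store *bs, spdk_blob_id blobid)
{
        spdk_bs_create_snapshot(bs, blobid, NULL, snapshot_done, bs);
}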
00:08:36.618 Suite: blob_copy_noextent
00:08:36.618 Test: blob_init ...[2024-07-13 22:54:26.004128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:08:36.618 passed
00:08:36.877 Test: blob_thin_provision ...passed
00:08:36.877 Test: blob_read_only ...passed
00:08:36.877 Test: bs_load ...[2024-07-13 22:54:26.056357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:08:36.877 passed
00:08:36.877 Test: bs_load_custom_cluster_size ...passed
00:08:36.877 Test: bs_load_after_failed_grow ...passed
00:08:36.877 Test: bs_cluster_sz ...[2024-07-13 22:54:26.083117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:08:36.877 [2024-07-13 22:54:26.083335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:08:36.877 [2024-07-13 22:54:26.083380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:08:36.877 passed
00:08:36.877 Test: bs_resize_md ...passed
00:08:36.877 Test: bs_destroy ...passed
00:08:36.877 Test: bs_type ...passed
00:08:36.877 Test: bs_super_block ...passed
00:08:36.877 Test: bs_test_recover_cluster_count ...passed
00:08:36.877 Test: bs_grow_live ...passed
00:08:36.877 Test: bs_grow_live_no_space ...passed
00:08:36.877 Test: bs_test_grow ...passed
00:08:36.877 Test: blob_serialize_test ...passed
00:08:36.877 Test: super_block_crc ...passed
00:08:36.877 Test: blob_thin_prov_write_count_io ...passed
00:08:36.877 Test: blob_thin_prov_unmap_cluster ...passed
00:08:36.877 Test: bs_load_iter_test ...passed
00:08:37.136 Test: blob_relations ...[2024-07-13 22:54:26.290222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:37.136 [2024-07-13 22:54:26.290352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.290978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:37.136 [2024-07-13 22:54:26.291032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 passed
00:08:37.136 Test: blob_relations2 ...[2024-07-13 22:54:26.305310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:37.136 [2024-07-13 22:54:26.305432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.305496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:37.136 [2024-07-13 22:54:26.305512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.306540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:37.136 [2024-07-13 22:54:26.306605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.306950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:37.136 [2024-07-13 22:54:26.306997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 passed
00:08:37.136 Test: blob_relations3 ...passed
00:08:37.136 Test: blobstore_clean_power_failure ...passed
00:08:37.136 Test: blob_delete_snapshot_power_failure ...[2024-07-13 22:54:26.483307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:08:37.136 [2024-07-13 22:54:26.496294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:37.136 [2024-07-13 22:54:26.496398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:37.136 [2024-07-13 22:54:26.496442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.509429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:08:37.136 [2024-07-13 22:54:26.509540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:37.136 [2024-07-13 22:54:26.509581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:37.136 [2024-07-13 22:54:26.509614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.522997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:08:37.136 [2024-07-13 22:54:26.523131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.136 [2024-07-13 22:54:26.537321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:08:37.136 [2024-07-13 22:54:26.537512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.395 [2024-07-13 22:54:26.552509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:08:37.395 [2024-07-13 22:54:26.552648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:37.395 passed
00:08:37.395 Test: blob_create_snapshot_power_failure ...[2024-07-13 22:54:26.596272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:37.395 [2024-07-13 22:54:26.622499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:08:37.395 [2024-07-13 22:54:26.636494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:08:37.395 passed
00:08:37.395 Test: blob_io_unit ...passed
00:08:37.395 Test: blob_io_unit_compatibility ...passed
00:08:37.395 Test: blob_ext_md_pages ...passed
00:08:37.395 Test: blob_esnap_io_4096_4096 ...passed
00:08:37.395 Test: blob_esnap_io_512_512 ...passed
00:08:37.654 Test: blob_esnap_io_4096_512 ...passed
00:08:37.654 Test: blob_esnap_io_512_4096 ...passed
00:08:37.654 Test: blob_esnap_clone_resize ...passed
00:08:37.654 Suite: blob_bs_copy_noextent
00:08:37.654 Test: blob_open ...passed
00:08:37.654 Test: blob_create ...[2024-07-13 22:54:26.934635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:08:37.654 passed
00:08:37.654 Test: blob_create_loop ...passed
00:08:37.654 Test: blob_create_fail ...[2024-07-13 22:54:27.039588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:37.654 passed
00:08:37.912 Test: blob_create_internal ...passed
00:08:37.912 Test: blob_create_zero_extent ...passed
00:08:37.912 Test: blob_snapshot ...passed
00:08:37.912 Test: blob_clone ...passed
00:08:37.912 Test: blob_inflate ...[2024-07-13 22:54:27.237415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:08:37.912 passed
00:08:37.912 Test: blob_delete ...passed
00:08:37.912 Test: blob_resize_test ...[2024-07-13 22:54:27.311640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:08:38.171 passed
00:08:38.171 Test: blob_resize_thin_test ...passed
00:08:38.171 Test: channel_ops ...passed
00:08:38.171 Test: blob_super ...passed
00:08:38.171 Test: blob_rw_verify_iov ...passed
00:08:38.171 Test: blob_unmap ...passed
00:08:38.171 Test: blob_iter ...passed
00:08:38.430 Test: blob_parse_md ...passed
00:08:38.430 Test: bs_load_pending_removal ...passed
00:08:38.430 Test: bs_unload ...[2024-07-13 22:54:27.669907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:08:38.430 passed
00:08:38.430 Test: bs_usable_clusters ...passed
00:08:38.430 Test: blob_crc ...[2024-07-13 22:54:27.749202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:38.430 [2024-07-13 22:54:27.749427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:38.430 passed
00:08:38.430 Test: blob_flags ...passed
00:08:38.689 Test: bs_version ...passed
00:08:38.689 Test: blob_set_xattrs_test ...[2024-07-13 22:54:27.867867] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:38.689 [2024-07-13 22:54:27.868041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:38.689 passed
00:08:38.689 Test: blob_thin_prov_alloc ...passed
00:08:38.689 Test: blob_insert_cluster_msg_test ...passed
00:08:38.948 Test: blob_thin_prov_rw ...passed
00:08:38.948 Test: blob_thin_prov_rle ...passed
00:08:38.948 Test: blob_thin_prov_rw_iov ...passed
00:08:38.948 Test: blob_snapshot_rw ...passed
00:08:38.948 Test: blob_snapshot_rw_iov ...passed
00:08:39.206 Test: blob_inflate_rw ...passed
00:08:39.206 Test: blob_snapshot_freeze_io ...passed
00:08:39.465 Test: blob_operation_split_rw ...passed
00:08:39.465 Test: blob_operation_split_rw_iov ...passed
00:08:39.724 Test: blob_simultaneous_operations ...[2024-07-13 22:54:28.884027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:39.724 [2024-07-13 22:54:28.884160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:39.724 [2024-07-13 22:54:28.884673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:39.724 [2024-07-13 22:54:28.884727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:39.724 [2024-07-13 22:54:28.887478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:39.724 [2024-07-13 22:54:28.887544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:39.724 [2024-07-13 22:54:28.887666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:39.724 [2024-07-13 22:54:28.887689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:39.724 passed
00:08:39.724 Test: blob_persist_test ...passed
00:08:39.984 Test: blob_decouple_snapshot ...passed
00:08:39.984 Test: blob_seek_io_unit ...passed
00:08:39.984 Test: blob_nested_freezes ...passed
00:08:40.243 Test: blob_clone_resize ...passed
00:08:40.243 Test: blob_shallow_copy ...[2024-07-13 22:54:29.166623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:08:40.243 [2024-07-13 22:54:29.166969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:08:40.243 [2024-07-13 22:54:29.167196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:08:40.243 passed
00:08:40.243 Suite: blob_blob_copy_noextent
00:08:40.243 Test: blob_write ...passed
00:08:40.243 Test: blob_read ...passed
00:08:40.243 Test: blob_rw_verify ...passed
00:08:40.243 Test: blob_rw_verify_iov_nomem ...passed
00:08:40.243 Test: blob_rw_iov_read_only ...passed
00:08:40.502 Test: blob_xattr ...passed
00:08:40.502 Test: blob_dirty_shutdown ...passed
00:08:40.502 Test: blob_is_degraded ...passed
00:08:40.502 Suite: blob_esnap_bs_copy_noextent
00:08:40.502 Test: blob_esnap_create ...passed
00:08:40.502 Test: blob_esnap_thread_add_remove ...passed
00:08:40.502 Test: blob_esnap_clone_snapshot ...passed
00:08:40.502 Test: blob_esnap_clone_inflate ...passed
00:08:40.502 Test: blob_esnap_clone_decouple ...passed
00:08:40.502 Test: blob_esnap_clone_reload ...passed
00:08:40.502 Test: blob_esnap_hotplug ...passed
00:08:40.502 Test: blob_set_parent ...[2024-07-13 22:54:29.783482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:08:40.502 [2024-07-13 22:54:29.783600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:08:40.502 [2024-07-13 22:54:29.783721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:08:40.502 [2024-07-13 22:54:29.783766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:08:40.502 [2024-07-13 22:54:29.784217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:40.502 passed
00:08:40.502 Test: blob_set_external_parent ...[2024-07-13 22:54:29.821499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:08:40.502 [2024-07-13 22:54:29.821616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:08:40.502 [2024-07-13 22:54:29.821661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:08:40.502 [2024-07-13 22:54:29.822025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:40.502 passed
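blob_create and blob_create_fail above exercise creation with explicit options: asking for 65 clusters on a too-small store fails with -28 (-ENOSPC, logged as "Unknown error -28"), and invalid options fail with -22 (-EINVAL). A sketch of option-driven creation, assuming the spdk_blob_opts_init()/spdk_bs_create_blob_ext() signatures of recent SPDK:

#include "spdk/blob.h"

static void
create_done(void *cb_arg, spdk_blob_id blobid, int bserrno)
{
        /* -ENOSPC here mirrors the "size in clusters/size: 65" failure. */
}

static void
create_blob(struct spdk_blob_store *bs)
{
        struct spdk_blob_opts opts;

        spdk_blob_opts_init(&opts, sizeof(opts));
        opts.num_clusters = 65; /* the oversized request the test makes */
        spdk_bs_create_blob_ext(bs, &opts, create_done, NULL);
}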
00:08:40.502 Suite: blob_copy_extent
00:08:40.502 Test: blob_init ...[2024-07-13 22:54:29.834661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:08:40.502 passed
00:08:40.502 Test: blob_thin_provision ...passed
00:08:40.502 Test: blob_read_only ...passed
00:08:40.761 Test: bs_load ...[2024-07-13 22:54:29.884420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:08:40.761 passed
00:08:40.761 Test: bs_load_custom_cluster_size ...passed
00:08:40.761 Test: bs_load_after_failed_grow ...passed
00:08:40.761 Test: bs_cluster_sz ...[2024-07-13 22:54:29.911567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:08:40.761 [2024-07-13 22:54:29.911779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:08:40.761 [2024-07-13 22:54:29.911822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:08:40.761 passed
00:08:40.761 Test: bs_resize_md ...passed
00:08:40.761 Test: bs_destroy ...passed
00:08:40.761 Test: bs_type ...passed
00:08:40.761 Test: bs_super_block ...passed
00:08:40.761 Test: bs_test_recover_cluster_count ...passed
00:08:40.761 Test: bs_grow_live ...passed
00:08:40.761 Test: bs_grow_live_no_space ...passed
00:08:40.761 Test: bs_test_grow ...passed
00:08:40.761 Test: blob_serialize_test ...passed
00:08:40.761 Test: super_block_crc ...passed
00:08:40.761 Test: blob_thin_prov_write_count_io ...passed
00:08:40.761 Test: blob_thin_prov_unmap_cluster ...passed
00:08:40.761 Test: bs_load_iter_test ...passed
00:08:40.761 Test: blob_relations ...[2024-07-13 22:54:30.104889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:40.761 [2024-07-13 22:54:30.105046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:40.761 [2024-07-13 22:54:30.105750] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:40.761 [2024-07-13 22:54:30.105820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:40.761 passed
00:08:40.761 Test: blob_relations2 ...[2024-07-13 22:54:30.120369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:40.761 [2024-07-13 22:54:30.120461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:40.761 [2024-07-13 22:54:30.120514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:40.761 [2024-07-13 22:54:30.120534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:40.761 [2024-07-13 22:54:30.121669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:40.761 [2024-07-13 22:54:30.121748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:40.761 [2024-07-13 22:54:30.122124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:40.761 [2024-07-13 22:54:30.122172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:40.761 passed
00:08:40.761 Test: blob_relations3 ...passed
00:08:41.020 Test: blobstore_clean_power_failure ...passed
00:08:41.020 Test: blob_delete_snapshot_power_failure ...[2024-07-13 22:54:30.297700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:41.020 [2024-07-13 22:54:30.310670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:41.020 [2024-07-13 22:54:30.323394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:41.020 [2024-07-13 22:54:30.323495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:41.020 [2024-07-13 22:54:30.323540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:41.020 [2024-07-13 22:54:30.336576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:41.020 [2024-07-13 22:54:30.336682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:41.020 [2024-07-13 22:54:30.336725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:41.020 [2024-07-13 22:54:30.336753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:41.020 [2024-07-13 22:54:30.350132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:41.020 [2024-07-13 22:54:30.353106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:41.020 [2024-07-13 22:54:30.353166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:41.020 [2024-07-13 22:54:30.353198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:41.020 [2024-07-13 22:54:30.367006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:08:41.020 [2024-07-13 22:54:30.367141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:41.020 [2024-07-13 22:54:30.380555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:08:41.020 [2024-07-13 22:54:30.380707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:41.020 [2024-07-13 22:54:30.395090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:08:41.020 [2024-07-13 22:54:30.395220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:41.020 passed
00:08:41.278 Test: blob_create_snapshot_power_failure ...[2024-07-13 22:54:30.438459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:41.278 [2024-07-13 22:54:30.452117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:41.278 [2024-07-13 22:54:30.479432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:41.278 [2024-07-13 22:54:30.493121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:08:41.278 passed
00:08:41.278 Test: blob_io_unit ...passed
00:08:41.278 Test: blob_io_unit_compatibility ...passed
00:08:41.278 Test: blob_ext_md_pages ...passed
00:08:41.278 Test: blob_esnap_io_4096_4096 ...passed
00:08:41.278 Test: blob_esnap_io_512_512 ...passed
00:08:41.278 Test: blob_esnap_io_4096_512 ...passed
00:08:41.568 Test: blob_esnap_io_512_4096 ...passed
00:08:41.568 Test: blob_esnap_clone_resize ...passed
00:08:41.568 Suite: blob_bs_copy_extent
00:08:41.568 Test: blob_open ...passed
00:08:41.568 Test: blob_create ...[2024-07-13 22:54:30.791472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:08:41.568 passed
00:08:41.568 Test: blob_create_loop ...passed
00:08:41.568 Test: blob_create_fail ...[2024-07-13 22:54:30.905812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:41.568 passed
00:08:41.568 Test: blob_create_internal ...passed
00:08:41.848 Test: blob_create_zero_extent ...passed
00:08:41.848 Test: blob_snapshot ...passed
00:08:41.848 Test: blob_clone ...passed
00:08:41.848 Test: blob_inflate ...[2024-07-13 22:54:31.097702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:08:41.848 passed
00:08:41.848 Test: blob_delete ...passed
00:08:41.848 Test: blob_resize_test ...[2024-07-13 22:54:31.168052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:08:41.848 passed
00:08:41.848 Test: blob_resize_thin_test ...passed
00:08:42.106 Test: channel_ops ...passed
00:08:42.106 Test: blob_super ...passed
00:08:42.106 Test: blob_rw_verify_iov ...passed
00:08:42.106 Test: blob_unmap ...passed
00:08:42.106 Test: blob_iter ...passed
00:08:42.106 Test: blob_parse_md ...passed
00:08:42.106 Test: bs_load_pending_removal ...passed
00:08:42.106 Test: bs_unload ...[2024-07-13 22:54:31.509658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:08:42.365 passed
00:08:42.365 Test: bs_usable_clusters ...passed
00:08:42.365 Test: blob_crc ...[2024-07-13 22:54:31.581348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:42.365 [2024-07-13 22:54:31.581527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:42.365 passed
00:08:42.365 Test: blob_flags ...passed
00:08:42.365 Test: bs_version ...passed
00:08:42.365 Test: blob_set_xattrs_test ...[2024-07-13 22:54:31.687500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:42.365 [2024-07-13 22:54:31.687644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:42.365 passed
00:08:42.625 Test: blob_thin_prov_alloc ...passed
00:08:42.625 Test: blob_insert_cluster_msg_test ...passed
00:08:42.625 Test: blob_thin_prov_rw ...passed
00:08:42.625 Test: blob_thin_prov_rle ...passed
00:08:42.625 Test: blob_thin_prov_rw_iov ...passed
00:08:42.625 Test: blob_snapshot_rw ...passed
00:08:42.883 Test: blob_snapshot_rw_iov ...passed
00:08:42.883 Test: blob_inflate_rw ...passed
00:08:43.141 Test: blob_snapshot_freeze_io ...passed
00:08:43.141 Test: blob_operation_split_rw ...passed
00:08:43.399 Test: blob_operation_split_rw_iov ...passed
00:08:43.399 Test: blob_simultaneous_operations ...[2024-07-13 22:54:32.615550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:43.399 [2024-07-13 22:54:32.615669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:43.399 [2024-07-13 22:54:32.616208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:43.399 [2024-07-13 22:54:32.616256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:43.399 [2024-07-13 22:54:32.618806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:43.399 [2024-07-13 22:54:32.618874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:43.399 [2024-07-13 22:54:32.618987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:43.399 [2024-07-13 22:54:32.619010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:43.399 passed
00:08:43.399 Test: blob_persist_test ...passed
00:08:43.399 Test: blob_decouple_snapshot ...passed
00:08:43.399 Test: blob_seek_io_unit ...passed
00:08:43.399 Test: blob_nested_freezes ...passed
00:08:43.658 Test: blob_clone_resize ...passed
00:08:43.658 Test: blob_shallow_copy ...[2024-07-13 22:54:32.863823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:08:43.658 [2024-07-13 22:54:32.864162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:08:43.658 [2024-07-13 22:54:32.864435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:08:43.658 passed
00:08:43.658 Suite: blob_blob_copy_extent
00:08:43.658 Test: blob_write ...passed
00:08:43.658 Test: blob_read ...passed
00:08:43.658 Test: blob_rw_verify ...passed
00:08:43.658 Test: blob_rw_verify_iov_nomem ...passed
00:08:43.916 Test: blob_rw_iov_read_only ...passed
00:08:43.916 Test: blob_xattr ...passed
00:08:43.916 Test: blob_dirty_shutdown ...passed
00:08:43.916 Test: blob_is_degraded ...passed
00:08:43.916 Suite: blob_esnap_bs_copy_extent
00:08:43.916 Test: blob_esnap_create ...passed
00:08:43.916 Test: blob_esnap_thread_add_remove ...passed
00:08:43.916 Test: blob_esnap_clone_snapshot ...passed
00:08:44.174 Test: blob_esnap_clone_inflate ...passed
00:08:44.174 Test: blob_esnap_clone_decouple ...passed
00:08:44.174 Test: blob_esnap_clone_reload ...passed
00:08:44.174 Test: blob_esnap_hotplug ...passed
00:08:44.174 Test: blob_set_parent ...[2024-07-13 22:54:33.426374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:08:44.174 [2024-07-13 22:54:33.426491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:08:44.174 [2024-07-13 22:54:33.426632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:08:44.174 [2024-07-13 22:54:33.426679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:08:44.174 [2024-07-13 22:54:33.427230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:44.174 passed
00:08:44.174 Test: blob_set_external_parent ...[2024-07-13 22:54:33.465522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:08:44.174 [2024-07-13 22:54:33.465666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:08:44.174 [2024-07-13 22:54:33.465726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:08:44.174 [2024-07-13 22:54:33.466236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:44.174 passed
00:08:44.174
00:08:44.174 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:44.174               suites     16     16    n/a      0        0
00:08:44.174                tests    376    376    376      0        0
00:08:44.174              asserts 143965 143965 143965      0      n/a
00:08:44.174
00:08:44.174 Elapsed time = 15.226 seconds
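blob_ut finishes with a clean 376/376 summary, and the harness moves on to blob_bdev_ut, which covers the glue that wraps a bdev as a blobstore device; both errors it logs ("bdev name 'nope': unsupported options", "could not claim bs dev") come from that layer. The usual wiring looks roughly like the sketch below, modeled on SPDK's hello_blob example; spdk_bdev_create_bs_dev_ext() is assumed to have its documented signature.

#include "spdk/bdev.h"
#include "spdk/blob.h"
#include "spdk/blob_bdev.h"

static void
base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                   void *event_ctx)
{
        /* Handle hot-remove/resize events from the base bdev. */
}

static void
bs_ready(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
}

static void
open_bs_on_bdev(const char *bdev_name)
{
        struct spdk_bs_dev *bs_dev = NULL;

        /* Fails like create_bs_dev_ro below when the bdev is missing or the
         * requested options are unsupported; spdk_bs_init() then claims the
         * device, the step claim_bs_dev exercises. */
        if (spdk_bdev_create_bs_dev_ext(bdev_name, base_bdev_event_cb, NULL,
                                        &bs_dev) != 0) {
                return;
        }
        spdk_bs_init(bs_dev, NULL, bs_ready, NULL);
}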
[2024-07-13 22:54:33.426679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:44.174 [2024-07-13 22:54:33.427230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:44.174 passed 00:08:44.174 Test: blob_set_external_parent ...[2024-07-13 22:54:33.465522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:44.174 [2024-07-13 22:54:33.465666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:44.174 [2024-07-13 22:54:33.465726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:44.174 [2024-07-13 22:54:33.466236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:44.174 passed 00:08:44.174 00:08:44.174 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.174 suites 16 16 n/a 0 0 00:08:44.174 tests 376 376 376 0 0 00:08:44.174 asserts 143965 143965 143965 0 n/a 00:08:44.174 00:08:44.174 Elapsed time = 15.226 seconds 00:08:44.174 22:54:33 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:44.174 00:08:44.174 00:08:44.174 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.174 http://cunit.sourceforge.net/ 00:08:44.174 00:08:44.174 00:08:44.174 Suite: blob_bdev 00:08:44.174 Test: create_bs_dev ...passed 00:08:44.174 Test: create_bs_dev_ro ...[2024-07-13 22:54:33.573137] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:44.174 passed 00:08:44.174 Test: create_bs_dev_rw ...passed 00:08:44.174 Test: claim_bs_dev ...[2024-07-13 22:54:33.573657] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:44.174 passed 00:08:44.174 Test: claim_bs_dev_ro ...passed 00:08:44.174 Test: deferred_destroy_refs ...passed 00:08:44.174 Test: deferred_destroy_channels ...passed 00:08:44.174 Test: deferred_destroy_threads ...passed 00:08:44.174 00:08:44.174 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.174 suites 1 1 n/a 0 0 00:08:44.174 tests 8 8 8 0 0 00:08:44.174 asserts 119 119 119 0 n/a 00:08:44.174 00:08:44.174 Elapsed time = 0.001 seconds 00:08:44.432 22:54:33 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:44.432 00:08:44.432 00:08:44.432 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.432 http://cunit.sourceforge.net/ 00:08:44.432 00:08:44.432 00:08:44.432 Suite: tree 00:08:44.432 Test: blobfs_tree_op_test ...passed 00:08:44.432 00:08:44.432 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.432 suites 1 1 n/a 0 0 00:08:44.432 tests 1 1 1 0 0 00:08:44.432 asserts 27 27 27 0 n/a 00:08:44.432 00:08:44.432 Elapsed time = 0.000 seconds 00:08:44.432 22:54:33 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:44.432 00:08:44.432 00:08:44.432 
CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.432 http://cunit.sourceforge.net/ 00:08:44.432 00:08:44.432 00:08:44.432 Suite: blobfs_async_ut 00:08:44.432 Test: fs_init ...passed 00:08:44.432 Test: fs_open ...passed 00:08:44.432 Test: fs_create ...passed 00:08:44.432 Test: fs_truncate ...passed 00:08:44.432 Test: fs_rename ...[2024-07-13 22:54:33.766767] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:44.432 passed 00:08:44.432 Test: fs_rw_async ...passed 00:08:44.432 Test: fs_writev_readv_async ...passed 00:08:44.432 Test: tree_find_buffer_ut ...passed 00:08:44.432 Test: channel_ops ...passed 00:08:44.432 Test: channel_ops_sync ...passed 00:08:44.432 00:08:44.432 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.432 suites 1 1 n/a 0 0 00:08:44.432 tests 10 10 10 0 0 00:08:44.432 asserts 292 292 292 0 n/a 00:08:44.432 00:08:44.432 Elapsed time = 0.179 seconds 00:08:44.690 22:54:33 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:44.690 00:08:44.690 00:08:44.690 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.690 http://cunit.sourceforge.net/ 00:08:44.690 00:08:44.690 00:08:44.690 Suite: blobfs_sync_ut 00:08:44.690 Test: cache_read_after_write ...[2024-07-13 22:54:33.952149] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:44.690 passed 00:08:44.690 Test: file_length ...passed 00:08:44.690 Test: append_write_to_extend_blob ...passed 00:08:44.690 Test: partial_buffer ...passed 00:08:44.690 Test: cache_write_null_buffer ...passed 00:08:44.690 Test: fs_create_sync ...passed 00:08:44.690 Test: fs_rename_sync ...passed 00:08:44.690 Test: cache_append_no_cache ...passed 00:08:44.690 Test: fs_delete_file_without_close ...passed 00:08:44.690 00:08:44.690 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.690 suites 1 1 n/a 0 0 00:08:44.690 tests 9 9 9 0 0 00:08:44.690 asserts 345 345 345 0 n/a 00:08:44.690 00:08:44.690 Elapsed time = 0.376 seconds 00:08:44.950 22:54:34 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:44.950 00:08:44.950 00:08:44.950 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.950 http://cunit.sourceforge.net/ 00:08:44.950 00:08:44.950 00:08:44.950 Suite: blobfs_bdev_ut 00:08:44.950 Test: spdk_blobfs_bdev_detect_test ...[2024-07-13 22:54:34.142380] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:44.950 passed 00:08:44.950 Test: spdk_blobfs_bdev_create_test ...passed 00:08:44.950 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:44.950 00:08:44.950 [2024-07-13 22:54:34.142787] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:44.950 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.950 suites 1 1 n/a 0 0 00:08:44.950 tests 3 3 3 0 0 00:08:44.950 asserts 9 9 9 0 n/a 00:08:44.950 00:08:44.950 Elapsed time = 0.001 seconds 00:08:44.950 00:08:44.950 real 0m15.938s 00:08:44.950 user 0m15.398s 00:08:44.950 sys 0m0.743s 00:08:44.950 22:54:34 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.950 22:54:34 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:08:44.950 ************************************ 00:08:44.950 END TEST unittest_blob_blobfs 00:08:44.950 ************************************ 00:08:44.950 22:54:34 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:44.950 22:54:34 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:08:44.950 22:54:34 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:44.950 22:54:34 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.950 22:54:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:44.950 ************************************ 00:08:44.950 START TEST unittest_event 00:08:44.950 ************************************ 00:08:44.950 22:54:34 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:08:44.950 22:54:34 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:44.950 00:08:44.950 00:08:44.950 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.950 http://cunit.sourceforge.net/ 00:08:44.950 00:08:44.950 00:08:44.950 Suite: app_suite 00:08:44.950 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:44.950 00:08:44.950 CPU options: 00:08:44.950 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:44.950 (like [0,1,10]) 00:08:44.950 --lcores lcore to CPU mapping list. The list is in the format: 00:08:44.950 [<,lcores[@CPUs]>...] 00:08:44.950 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:44.950 Within the group, '-' is used for range separator, 00:08:44.950 ',' is used for single number separator. 00:08:44.950 '( )' can be omitted for single element group, 00:08:44.950 '@' can be omitted if cpus and lcores have the same value 00:08:44.950 --disable-cpumask-locks Disable CPU core lock files. 00:08:44.950 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:44.950 pollers in the app support interrupt mode) 00:08:44.950 -p, --main-core main (primary) core for DPDK 00:08:44.950 00:08:44.950 Configuration options: 00:08:44.950 -c, --config, --json JSON config file 00:08:44.950 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:44.950 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:44.950 app_ut: invalid option -- 'z' 00:08:44.950 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:44.950 --rpcs-allowed comma-separated list of permitted RPCS 00:08:44.950 --json-ignore-init-errors don't exit on invalid config entry 00:08:44.950 00:08:44.950 Memory options: 00:08:44.950 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:44.950 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:44.950 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:44.950 -R, --huge-unlink unlink huge files after initialization 00:08:44.950 -n, --mem-channels number of memory channels used for DPDK 00:08:44.950 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:44.950 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:44.950 --no-huge run without using hugepages 00:08:44.950 -i, --shm-id shared memory ID (optional) 00:08:44.950 -g, --single-file-segments force creating just one hugetlbfs file 00:08:44.950 00:08:44.950 PCI options: 00:08:44.950 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:44.950 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:44.950 -u, --no-pci disable PCI access 00:08:44.950 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:44.950 00:08:44.950 Log options: 00:08:44.950 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:44.950 --silence-noticelog disable notice level logging to stderr 00:08:44.950 00:08:44.950 Trace options: 00:08:44.950 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:44.950 setting 0 to disable trace (default 32768) 00:08:44.950 Tracepoints vary in size and can use more than one trace entry. 00:08:44.950 -e, --tpoint-group [:] 00:08:44.950 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:44.950 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:44.950 a tracepoint group. First tpoint inside a group can be enabled by 00:08:44.950 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:44.950 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:44.950 in /include/spdk_internal/trace_defs.h 00:08:44.950 00:08:44.950 Other options: 00:08:44.950 -h, --help show this usage 00:08:44.950 -v, --version print SPDK version 00:08:44.950 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:44.950 --env-context Opaque context for use of the env implementation 00:08:44.950 app_ut [options] 00:08:44.950 00:08:44.950 CPU options: 00:08:44.950 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:44.950 app_ut: unrecognized option '--test-long-opt' 00:08:44.950 (like [0,1,10]) 00:08:44.950 --lcores lcore to CPU mapping list. The list is in the format: 00:08:44.950 [<,lcores[@CPUs]>...] 00:08:44.950 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:44.950 Within the group, '-' is used for range separator, 00:08:44.950 ',' is used for single number separator. 00:08:44.950 '( )' can be omitted for single element group, 00:08:44.950 '@' can be omitted if cpus and lcores have the same value 00:08:44.951 --disable-cpumask-locks Disable CPU core lock files. 
00:08:44.951 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:44.951 pollers in the app support interrupt mode) 00:08:44.951 -p, --main-core main (primary) core for DPDK 00:08:44.951 00:08:44.951 Configuration options: 00:08:44.951 -c, --config, --json JSON config file 00:08:44.951 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:44.951 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:44.951 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:44.951 --rpcs-allowed comma-separated list of permitted RPCS 00:08:44.951 --json-ignore-init-errors don't exit on invalid config entry 00:08:44.951 00:08:44.951 Memory options: 00:08:44.951 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:44.951 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:44.951 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:44.951 -R, --huge-unlink unlink huge files after initialization 00:08:44.951 -n, --mem-channels number of memory channels used for DPDK 00:08:44.951 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:44.951 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:44.951 --no-huge run without using hugepages 00:08:44.951 -i, --shm-id shared memory ID (optional) 00:08:44.951 -g, --single-file-segments force creating just one hugetlbfs file 00:08:44.951 00:08:44.951 PCI options: 00:08:44.951 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:44.951 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:44.951 -u, --no-pci disable PCI access 00:08:44.951 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:44.951 00:08:44.951 Log options: 00:08:44.951 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:44.951 --silence-noticelog disable notice level logging to stderr 00:08:44.951 00:08:44.951 Trace options: 00:08:44.951 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:44.951 setting 0 to disable trace (default 32768) 00:08:44.951 Tracepoints vary in size and can use more than one trace entry. 00:08:44.951 -e, --tpoint-group [:] 00:08:44.951 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:44.951 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:44.951 a tracepoint group. First tpoint inside a group can be enabled by 00:08:44.951 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:44.951 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:44.951 in /include/spdk_internal/trace_defs.h 00:08:44.951 00:08:44.951 Other options: 00:08:44.951 -h, --help show this usage 00:08:44.951 -v, --version print SPDK version 00:08:44.951 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:44.951 --env-context Opaque context for use of the env implementation 00:08:44.951 [2024-07-13 22:54:34.229243] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:08:44.951 [2024-07-13 22:54:34.229624] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1372:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:44.951 app_ut [options] 00:08:44.951 00:08:44.951 CPU options: 00:08:44.951 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:44.951 (like [0,1,10]) 00:08:44.951 --lcores lcore to CPU mapping list. The list is in the format: 00:08:44.951 [<,lcores[@CPUs]>...] 00:08:44.951 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:44.951 Within the group, '-' is used for range separator, 00:08:44.951 ',' is used for single number separator. 00:08:44.951 '( )' can be omitted for single element group, 00:08:44.951 '@' can be omitted if cpus and lcores have the same value 00:08:44.951 --disable-cpumask-locks Disable CPU core lock files. 00:08:44.951 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:44.951 pollers in the app support interrupt mode) 00:08:44.951 -p, --main-core main (primary) core for DPDK 00:08:44.951 00:08:44.951 Configuration options: 00:08:44.951 -c, --config, --json JSON config file 00:08:44.951 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:44.951 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:44.951 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:44.951 --rpcs-allowed comma-separated list of permitted RPCS 00:08:44.951 --json-ignore-init-errors don't exit on invalid config entry 00:08:44.951 00:08:44.951 Memory options: 00:08:44.951 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:44.951 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:44.951 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:44.951 -R, --huge-unlink unlink huge files after initialization 00:08:44.951 -n, --mem-channels number of memory channels used for DPDK 00:08:44.951 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:44.951 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:44.951 --no-huge run without using hugepages 00:08:44.951 -i, --shm-id shared memory ID (optional) 00:08:44.951 -g, --single-file-segments force creating just one hugetlbfs file 00:08:44.951 00:08:44.951 PCI options: 00:08:44.951 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:44.951 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:44.951 -u, --no-pci disable PCI access 00:08:44.951 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:44.951 00:08:44.951 Log options: 00:08:44.951 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:44.951 --silence-noticelog disable notice level logging to stderr 00:08:44.951 00:08:44.951 Trace options: 00:08:44.951 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:44.951 setting 0 to disable trace (default 32768) 00:08:44.951 Tracepoints vary in size and can use more than one trace entry. 00:08:44.951 -e, --tpoint-group [:] 00:08:44.951 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:44.951 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:44.951 a tracepoint group. First tpoint inside a group can be enabled by 00:08:44.951 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:08:44.951 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:44.951 in /include/spdk_internal/trace_defs.h 00:08:44.951 00:08:44.951 Other options: 00:08:44.951 -h, --help show this usage 00:08:44.951 -v, --version print SPDK version 00:08:44.951 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:44.951 --env-context Opaque context for use of the env implementation 00:08:44.951 passed 00:08:44.951 00:08:44.951 [2024-07-13 22:54:34.229930] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1277:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:44.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.951 suites 1 1 n/a 0 0 00:08:44.951 tests 1 1 1 0 0 00:08:44.951 asserts 8 8 8 0 n/a 00:08:44.951 00:08:44.951 Elapsed time = 0.001 seconds 00:08:44.951 22:54:34 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:44.951 00:08:44.951 00:08:44.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.951 http://cunit.sourceforge.net/ 00:08:44.951 00:08:44.951 00:08:44.951 Suite: app_suite 00:08:44.951 Test: test_create_reactor ...passed 00:08:44.951 Test: test_init_reactors ...passed 00:08:44.951 Test: test_event_call ...passed 00:08:44.951 Test: test_schedule_thread ...passed 00:08:44.951 Test: test_reschedule_thread ...passed 00:08:44.951 Test: test_bind_thread ...passed 00:08:44.951 Test: test_for_each_reactor ...passed 00:08:44.951 Test: test_reactor_stats ...passed 00:08:44.951 Test: test_scheduler ...passed 00:08:44.951 Test: test_governor ...passed 00:08:44.951 00:08:44.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.951 suites 1 1 n/a 0 0 00:08:44.951 tests 10 10 10 0 0 00:08:44.951 asserts 344 344 344 0 n/a 00:08:44.951 00:08:44.951 Elapsed time = 0.017 seconds 00:08:44.951 00:08:44.951 real 0m0.086s 00:08:44.951 user 0m0.046s 00:08:44.951 sys 0m0.041s 00:08:44.951 22:54:34 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.951 22:54:34 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:08:44.951 ************************************ 00:08:44.951 END TEST unittest_event 00:08:44.951 ************************************ 00:08:44.951 22:54:34 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:44.951 22:54:34 unittest -- unit/unittest.sh@235 -- # uname -s 00:08:44.951 22:54:34 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:08:44.951 22:54:34 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:08:44.951 22:54:34 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:44.951 22:54:34 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.951 22:54:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:44.951 ************************************ 00:08:44.951 START TEST unittest_ftl 00:08:44.951 ************************************ 00:08:44.951 22:54:34 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:08:44.951 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:45.210 00:08:45.210 00:08:45.210 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.210 http://cunit.sourceforge.net/ 00:08:45.210 00:08:45.210 00:08:45.210 Suite: ftl_band_suite 00:08:45.210 Test: test_band_block_offset_from_addr_base ...passed 00:08:45.210 Test: 
test_band_block_offset_from_addr_offset ...passed 00:08:45.210 Test: test_band_addr_from_block_offset ...passed 00:08:45.210 Test: test_band_set_addr ...passed 00:08:45.210 Test: test_invalidate_addr ...passed 00:08:45.210 Test: test_next_xfer_addr ...passed 00:08:45.210 00:08:45.210 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.210 suites 1 1 n/a 0 0 00:08:45.210 tests 6 6 6 0 0 00:08:45.210 asserts 30356 30356 30356 0 n/a 00:08:45.210 00:08:45.210 Elapsed time = 0.183 seconds 00:08:45.210 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:45.469 00:08:45.469 00:08:45.469 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.469 http://cunit.sourceforge.net/ 00:08:45.469 00:08:45.469 00:08:45.469 Suite: ftl_bitmap 00:08:45.469 Test: test_ftl_bitmap_create ...[2024-07-13 22:54:34.628229] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:45.469 [2024-07-13 22:54:34.628715] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:45.469 passed 00:08:45.469 Test: test_ftl_bitmap_get ...passed 00:08:45.469 Test: test_ftl_bitmap_set ...passed 00:08:45.469 Test: test_ftl_bitmap_clear ...passed 00:08:45.469 Test: test_ftl_bitmap_find_first_set ...passed 00:08:45.469 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:45.469 Test: test_ftl_bitmap_count_set ...passed 00:08:45.469 00:08:45.469 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.469 suites 1 1 n/a 0 0 00:08:45.469 tests 7 7 7 0 0 00:08:45.469 asserts 137 137 137 0 n/a 00:08:45.469 00:08:45.469 Elapsed time = 0.001 seconds 00:08:45.469 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:45.469 00:08:45.469 00:08:45.469 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.469 http://cunit.sourceforge.net/ 00:08:45.469 00:08:45.469 00:08:45.469 Suite: ftl_io_suite 00:08:45.469 Test: test_completion ...passed 00:08:45.469 Test: test_multiple_ios ...passed 00:08:45.469 00:08:45.469 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.469 suites 1 1 n/a 0 0 00:08:45.469 tests 2 2 2 0 0 00:08:45.469 asserts 47 47 47 0 n/a 00:08:45.469 00:08:45.469 Elapsed time = 0.003 seconds 00:08:45.469 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:45.469 00:08:45.469 00:08:45.469 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.469 http://cunit.sourceforge.net/ 00:08:45.469 00:08:45.469 00:08:45.469 Suite: ftl_mngt 00:08:45.469 Test: test_next_step ...passed 00:08:45.469 Test: test_continue_step ...passed 00:08:45.469 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:45.469 Test: test_fail_step ...passed 00:08:45.469 Test: test_mngt_call_and_call_rollback ...passed 00:08:45.469 Test: test_nested_process_failure ...passed 00:08:45.469 Test: test_call_init_success ...passed 00:08:45.469 Test: test_call_init_failure ...passed 00:08:45.469 00:08:45.469 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.469 suites 1 1 n/a 0 0 00:08:45.469 tests 8 8 8 0 0 00:08:45.469 asserts 196 196 196 0 n/a 00:08:45.469 00:08:45.469 Elapsed time = 0.002 seconds 00:08:45.469 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:45.469 00:08:45.469 00:08:45.469 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.469 http://cunit.sourceforge.net/ 00:08:45.469 00:08:45.469 00:08:45.469 Suite: ftl_mempool 00:08:45.469 Test: test_ftl_mempool_create ...passed 00:08:45.469 Test: test_ftl_mempool_get_put ...passed 00:08:45.469 00:08:45.469 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.469 suites 1 1 n/a 0 0 00:08:45.469 tests 2 2 2 0 0 00:08:45.469 asserts 36 36 36 0 n/a 00:08:45.469 00:08:45.469 Elapsed time = 0.000 seconds 00:08:45.469 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:45.469 00:08:45.469 00:08:45.469 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.469 http://cunit.sourceforge.net/ 00:08:45.469 00:08:45.469 00:08:45.469 Suite: ftl_addr64_suite 00:08:45.469 Test: test_addr_cached ...passed 00:08:45.469 00:08:45.469 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.469 suites 1 1 n/a 0 0 00:08:45.469 tests 1 1 1 0 0 00:08:45.469 asserts 1536 1536 1536 0 n/a 00:08:45.469 00:08:45.469 Elapsed time = 0.000 seconds 00:08:45.469 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:45.469 00:08:45.470 00:08:45.470 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.470 http://cunit.sourceforge.net/ 00:08:45.470 00:08:45.470 00:08:45.470 Suite: ftl_sb 00:08:45.470 Test: test_sb_crc_v2 ...passed 00:08:45.470 Test: test_sb_crc_v3 ...passed 00:08:45.470 Test: test_sb_v3_md_layout ...[2024-07-13 22:54:34.781712] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:45.470 [2024-07-13 22:54:34.782075] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:45.470 [2024-07-13 22:54:34.782136] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:45.470 [2024-07-13 22:54:34.782184] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:45.470 [2024-07-13 22:54:34.782222] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:45.470 [2024-07-13 22:54:34.782329] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:45.470 [2024-07-13 22:54:34.782371] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:45.470 [2024-07-13 22:54:34.782429] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:45.470 [2024-07-13 22:54:34.782517] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:45.470 [2024-07-13 22:54:34.782566] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 
00:08:45.470 [2024-07-13 22:54:34.782615] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:45.470 passed 00:08:45.470 Test: test_sb_v5_md_layout ...passed 00:08:45.470 00:08:45.470 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.470 suites 1 1 n/a 0 0 00:08:45.470 tests 4 4 4 0 0 00:08:45.470 asserts 160 160 160 0 n/a 00:08:45.470 00:08:45.470 Elapsed time = 0.002 seconds 00:08:45.470 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:45.470 00:08:45.470 00:08:45.470 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.470 http://cunit.sourceforge.net/ 00:08:45.470 00:08:45.470 00:08:45.470 Suite: ftl_layout_upgrade 00:08:45.470 Test: test_l2p_upgrade ...passed 00:08:45.470 00:08:45.470 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.470 suites 1 1 n/a 0 0 00:08:45.470 tests 1 1 1 0 0 00:08:45.470 asserts 152 152 152 0 n/a 00:08:45.470 00:08:45.470 Elapsed time = 0.001 seconds 00:08:45.470 22:54:34 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:08:45.470 00:08:45.470 00:08:45.470 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.470 http://cunit.sourceforge.net/ 00:08:45.470 00:08:45.470 00:08:45.470 Suite: ftl_p2l_suite 00:08:45.470 Test: test_p2l_num_pages ...passed 00:08:46.036 Test: test_ckpt_issue ...passed 00:08:46.603 Test: test_persist_band_p2l ...passed 00:08:47.168 Test: test_clean_restore_p2l ...passed 00:08:48.102 Test: test_dirty_restore_p2l ...passed 00:08:48.102 00:08:48.102 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.102 suites 1 1 n/a 0 0 00:08:48.102 tests 5 5 5 0 0 00:08:48.102 asserts 10020 10020 10020 0 n/a 00:08:48.102 00:08:48.102 Elapsed time = 2.643 seconds 00:08:48.102 00:08:48.102 real 0m3.158s 00:08:48.102 user 0m1.108s 00:08:48.102 sys 0m2.050s 00:08:48.102 22:54:37 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.102 22:54:37 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:08:48.102 ************************************ 00:08:48.102 END TEST unittest_ftl 00:08:48.102 ************************************ 00:08:48.360 22:54:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:48.360 22:54:37 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:48.360 22:54:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.360 22:54:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.360 22:54:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 ************************************ 00:08:48.360 START TEST unittest_accel 00:08:48.360 ************************************ 00:08:48.360 22:54:37 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:48.360 00:08:48.360 00:08:48.360 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.360 http://cunit.sourceforge.net/ 00:08:48.360 00:08:48.360 00:08:48.360 Suite: accel_sequence 00:08:48.360 Test: test_sequence_fill_copy ...passed 00:08:48.360 Test: test_sequence_abort ...passed 00:08:48.360 Test: test_sequence_append_error ...passed 00:08:48.360 Test: test_sequence_completion_error 
...[2024-07-13 22:54:37.581248] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1945:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f63705fe7c0 00:08:48.360 [2024-07-13 22:54:37.581582] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1945:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f63705fe7c0 00:08:48.360 [2024-07-13 22:54:37.581679] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1855:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f63705fe7c0 00:08:48.360 [2024-07-13 22:54:37.581736] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1855:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f63705fe7c0 00:08:48.360 passed 00:08:48.360 Test: test_sequence_decompress ...passed 00:08:48.360 Test: test_sequence_reverse ...passed 00:08:48.360 Test: test_sequence_copy_elision ...passed 00:08:48.360 Test: test_sequence_accel_buffers ...passed 00:08:48.360 Test: test_sequence_memory_domain ...[2024-07-13 22:54:37.592113] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1747:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:48.360 passed 00:08:48.360 Test: test_sequence_module_memory_domain ...[2024-07-13 22:54:37.592284] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1786:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:48.360 passed 00:08:48.360 Test: test_sequence_crypto ...passed 00:08:48.360 Test: test_sequence_driver ...[2024-07-13 22:54:37.598430] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1894:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f636f8957c0 using driver: ut 00:08:48.360 [2024-07-13 22:54:37.598563] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1958:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f636f8957c0 through driver: ut 00:08:48.360 passed 00:08:48.360 Test: test_sequence_same_iovs ...passed 00:08:48.360 Test: test_sequence_crc32 ...passed 00:08:48.360 Suite: accel 00:08:48.360 Test: test_spdk_accel_task_complete ...passed 00:08:48.360 Test: test_get_task ...passed 00:08:48.360 Test: test_spdk_accel_submit_copy ...passed 00:08:48.360 Test: test_spdk_accel_submit_dualcast ...[2024-07-13 22:54:37.603248] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:48.360 [2024-07-13 22:54:37.603315] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:48.360 passed 00:08:48.360 Test: test_spdk_accel_submit_compare ...passed 00:08:48.360 Test: test_spdk_accel_submit_fill ...passed 00:08:48.360 Test: test_spdk_accel_submit_crc32c ...passed 00:08:48.360 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:48.360 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:48.360 Test: test_spdk_accel_submit_xor ...passed 00:08:48.360 Test: test_spdk_accel_module_find_by_name ...passed 00:08:48.360 Test: test_spdk_accel_module_register ...passed 00:08:48.360 00:08:48.360 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.360 suites 2 2 n/a 0 0 00:08:48.360 tests 26 26 26 0 0 00:08:48.360 asserts 830 830 830 0 n/a 00:08:48.360 00:08:48.360 Elapsed time = 0.032 seconds 00:08:48.360 00:08:48.360 real 0m0.069s 00:08:48.360 user 0m0.032s 00:08:48.360 sys 0m0.037s 00:08:48.360 22:54:37 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:48.360 22:54:37 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:08:48.360 ************************************ 00:08:48.360 END TEST unittest_accel 00:08:48.360 ************************************ 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:48.361 22:54:37 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:48.361 ************************************ 00:08:48.361 START TEST unittest_ioat 00:08:48.361 ************************************ 00:08:48.361 22:54:37 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:48.361 00:08:48.361 00:08:48.361 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.361 http://cunit.sourceforge.net/ 00:08:48.361 00:08:48.361 00:08:48.361 Suite: ioat 00:08:48.361 Test: ioat_state_check ...passed 00:08:48.361 00:08:48.361 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.361 suites 1 1 n/a 0 0 00:08:48.361 tests 1 1 1 0 0 00:08:48.361 asserts 32 32 32 0 n/a 00:08:48.361 00:08:48.361 Elapsed time = 0.000 seconds 00:08:48.361 00:08:48.361 real 0m0.029s 00:08:48.361 user 0m0.009s 00:08:48.361 sys 0m0.021s 00:08:48.361 22:54:37 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.361 22:54:37 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:08:48.361 ************************************ 00:08:48.361 END TEST unittest_ioat 00:08:48.361 ************************************ 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:48.361 22:54:37 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:48.361 22:54:37 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.361 22:54:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:48.361 ************************************ 00:08:48.361 START TEST unittest_idxd_user 00:08:48.361 ************************************ 00:08:48.361 22:54:37 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:48.620 00:08:48.620 00:08:48.620 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.620 http://cunit.sourceforge.net/ 00:08:48.620 00:08:48.620 00:08:48.620 Suite: idxd_user 00:08:48.620 Test: test_idxd_wait_cmd ...[2024-07-13 22:54:37.771354] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:48.620 [2024-07-13 22:54:37.771616] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:48.620 passed 00:08:48.620 Test: test_idxd_reset_dev ...[2024-07-13 22:54:37.771758] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 
00:08:48.620 passed 00:08:48.620 Test: test_idxd_group_config ...passed 00:08:48.620 Test: test_idxd_wq_config ...passed 00:08:48.620 00:08:48.620 [2024-07-13 22:54:37.771804] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:48.620 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.620 suites 1 1 n/a 0 0 00:08:48.620 tests 4 4 4 0 0 00:08:48.620 asserts 20 20 20 0 n/a 00:08:48.620 00:08:48.620 Elapsed time = 0.001 seconds 00:08:48.620 00:08:48.620 real 0m0.034s 00:08:48.620 user 0m0.034s 00:08:48.620 sys 0m0.000s 00:08:48.620 22:54:37 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.620 22:54:37 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:08:48.620 ************************************ 00:08:48.620 END TEST unittest_idxd_user 00:08:48.620 ************************************ 00:08:48.620 22:54:37 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:48.620 22:54:37 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:08:48.620 22:54:37 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.620 22:54:37 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.620 22:54:37 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:48.620 ************************************ 00:08:48.620 START TEST unittest_iscsi 00:08:48.620 ************************************ 00:08:48.620 22:54:37 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:08:48.620 22:54:37 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:48.620 00:08:48.620 00:08:48.620 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.620 http://cunit.sourceforge.net/ 00:08:48.620 00:08:48.620 00:08:48.620 Suite: conn_suite 00:08:48.620 Test: read_task_split_in_order_case ...passed 00:08:48.620 Test: read_task_split_reverse_order_case ...passed 00:08:48.620 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:48.620 Test: process_non_read_task_completion_test ...passed 00:08:48.620 Test: free_tasks_on_connection ...passed 00:08:48.620 Test: free_tasks_with_queued_datain ...passed 00:08:48.620 Test: abort_queued_datain_task_test ...passed 00:08:48.620 Test: abort_queued_datain_tasks_test ...passed 00:08:48.620 00:08:48.620 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.620 suites 1 1 n/a 0 0 00:08:48.620 tests 8 8 8 0 0 00:08:48.620 asserts 230 230 230 0 n/a 00:08:48.620 00:08:48.620 Elapsed time = 0.000 seconds 00:08:48.620 22:54:37 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:48.620 00:08:48.620 00:08:48.620 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.620 http://cunit.sourceforge.net/ 00:08:48.620 00:08:48.620 00:08:48.620 Suite: iscsi_suite 00:08:48.620 Test: param_negotiation_test ...passed 00:08:48.620 Test: list_negotiation_test ...passed 00:08:48.620 Test: parse_valid_test ...passed 00:08:48.620 Test: parse_invalid_test ...[2024-07-13 22:54:37.893656] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:48.620 [2024-07-13 22:54:37.893931] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:48.620 [2024-07-13 22:54:37.893987] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 
207:iscsi_parse_param: *ERROR*: Empty key 00:08:48.620 [2024-07-13 22:54:37.894051] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:48.620 [2024-07-13 22:54:37.894199] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:48.620 [2024-07-13 22:54:37.894281] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:48.620 [2024-07-13 22:54:37.894412] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:48.620 passed 00:08:48.620 00:08:48.620 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.620 suites 1 1 n/a 0 0 00:08:48.620 tests 4 4 4 0 0 00:08:48.620 asserts 161 161 161 0 n/a 00:08:48.620 00:08:48.620 Elapsed time = 0.005 seconds 00:08:48.620 22:54:37 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:48.620 00:08:48.620 00:08:48.620 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.620 http://cunit.sourceforge.net/ 00:08:48.620 00:08:48.620 00:08:48.620 Suite: iscsi_target_node_suite 00:08:48.620 Test: add_lun_test_cases ...[2024-07-13 22:54:37.929347] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:48.620 [2024-07-13 22:54:37.929633] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:48.620 [2024-07-13 22:54:37.929721] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:48.620 [2024-07-13 22:54:37.929757] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:48.620 [2024-07-13 22:54:37.929786] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:48.620 passed 00:08:48.620 Test: allow_any_allowed ...passed 00:08:48.620 Test: allow_ipv6_allowed ...passed 00:08:48.620 Test: allow_ipv6_denied ...passed 00:08:48.620 Test: allow_ipv6_invalid ...passed 00:08:48.620 Test: allow_ipv4_allowed ...passed 00:08:48.620 Test: allow_ipv4_denied ...passed 00:08:48.620 Test: allow_ipv4_invalid ...passed 00:08:48.620 Test: node_access_allowed ...passed 00:08:48.620 Test: node_access_denied_by_empty_netmask ...passed 00:08:48.620 Test: node_access_multi_initiator_groups_cases ...passed 00:08:48.620 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:48.620 Test: chap_param_test_cases ...[2024-07-13 22:54:37.930144] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:48.620 [2024-07-13 22:54:37.930185] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:48.620 [2024-07-13 22:54:37.930236] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:48.620 [2024-07-13 22:54:37.930272] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:48.620 passed 00:08:48.620 00:08:48.620 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.620 suites 1 1 n/a 0 0 00:08:48.620 tests 13 
13 13 0 0 00:08:48.620 asserts 50 50 50 0 n/a 00:08:48.620 00:08:48.620 Elapsed time = 0.001 seconds[2024-07-13 22:54:37.930308] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:48.620 00:08:48.621 22:54:37 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:48.621 00:08:48.621 00:08:48.621 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.621 http://cunit.sourceforge.net/ 00:08:48.621 00:08:48.621 00:08:48.621 Suite: iscsi_suite 00:08:48.621 Test: op_login_check_target_test ...[2024-07-13 22:54:37.967100] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:08:48.621 passed 00:08:48.621 Test: op_login_session_normal_test ...[2024-07-13 22:54:37.967470] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:48.621 [2024-07-13 22:54:37.967519] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:48.621 [2024-07-13 22:54:37.967561] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:48.621 [2024-07-13 22:54:37.967607] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:48.621 [2024-07-13 22:54:37.967704] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:48.621 [2024-07-13 22:54:37.967796] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:48.621 passed 00:08:48.621 Test: maxburstlength_test ...[2024-07-13 22:54:37.967853] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:48.621 [2024-07-13 22:54:37.968117] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:48.621 [2024-07-13 22:54:37.968197] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:48.621 passed 00:08:48.621 Test: underflow_for_read_transfer_test ...passed 00:08:48.621 Test: underflow_for_zero_read_transfer_test ...passed 00:08:48.621 Test: underflow_for_request_sense_test ...passed 00:08:48.621 Test: underflow_for_check_condition_test ...passed 00:08:48.621 Test: add_transfer_task_test ...passed 00:08:48.621 Test: get_transfer_task_test ...passed 00:08:48.621 Test: del_transfer_task_test ...passed 00:08:48.621 Test: clear_all_transfer_tasks_test ...passed 00:08:48.621 Test: build_iovs_test ...passed 00:08:48.621 Test: build_iovs_with_md_test ...passed 00:08:48.621 Test: pdu_hdr_op_login_test ...[2024-07-13 22:54:37.969748] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:48.621 [2024-07-13 22:54:37.969900] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:48.621 passed 00:08:48.621 Test: pdu_hdr_op_text_test ...[2024-07-13 22:54:37.969994] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: 
*ERROR*: Received reserved NSG code: 2 00:08:48.621 [2024-07-13 22:54:37.970093] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:48.621 [2024-07-13 22:54:37.970182] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:48.621 passed 00:08:48.621 Test: pdu_hdr_op_logout_test ...[2024-07-13 22:54:37.970230] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:48.621 [2024-07-13 22:54:37.970300] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:08:48.621 passed 00:08:48.621 Test: pdu_hdr_op_scsi_test ...[2024-07-13 22:54:37.970468] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:48.621 [2024-07-13 22:54:37.970510] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:48.621 [2024-07-13 22:54:37.970570] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:48.621 [2024-07-13 22:54:37.970669] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:48.621 [2024-07-13 22:54:37.970762] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:48.621 passed 00:08:48.621 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-13 22:54:37.970941] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:48.621 [2024-07-13 22:54:37.971056] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:48.621 [2024-07-13 22:54:37.971146] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:48.621 passed 00:08:48.621 Test: pdu_hdr_op_nopout_test ...[2024-07-13 22:54:37.971353] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:48.621 [2024-07-13 22:54:37.971461] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:48.621 [2024-07-13 22:54:37.971500] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:48.621 passed 00:08:48.621 Test: pdu_hdr_op_data_test ...[2024-07-13 22:54:37.971536] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:48.621 [2024-07-13 22:54:37.971589] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:48.621 [2024-07-13 22:54:37.971658] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:48.621 [2024-07-13 22:54:37.971729] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu 
data length is larger than the value sent by R2T PDU 00:08:48.621 [2024-07-13 22:54:37.971788] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:48.621 [2024-07-13 22:54:37.971849] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:48.621 passed 00:08:48.621 Test: empty_text_with_cbit_test ...[2024-07-13 22:54:37.971926] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:48.621 [2024-07-13 22:54:37.971973] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:48.621 passed 00:08:48.621 Test: pdu_payload_read_test ...[2024-07-13 22:54:37.974109] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:48.621 passed 00:08:48.621 Test: data_out_pdu_sequence_test ...passed 00:08:48.621 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:48.621 00:08:48.621 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.621 suites 1 1 n/a 0 0 00:08:48.621 tests 24 24 24 0 0 00:08:48.621 asserts 150253 150253 150253 0 n/a 00:08:48.621 00:08:48.621 Elapsed time = 0.017 seconds 00:08:48.621 22:54:37 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:48.621 00:08:48.621 00:08:48.621 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.621 http://cunit.sourceforge.net/ 00:08:48.621 00:08:48.621 00:08:48.621 Suite: init_grp_suite 00:08:48.621 Test: create_initiator_group_success_case ...passed 00:08:48.621 Test: find_initiator_group_success_case ...passed 00:08:48.621 Test: register_initiator_group_twice_case ...passed 00:08:48.621 Test: add_initiator_name_success_case ...passed 00:08:48.621 Test: add_initiator_name_fail_case ...[2024-07-13 22:54:38.012267] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:48.621 passed 00:08:48.621 Test: delete_all_initiator_names_success_case ...passed 00:08:48.621 Test: add_netmask_success_case ...passed 00:08:48.621 Test: add_netmask_fail_case ...[2024-07-13 22:54:38.012598] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:48.621 passed 00:08:48.621 Test: delete_all_netmasks_success_case ...passed 00:08:48.621 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:48.621 Test: netmask_overwrite_all_to_any_case ...passed 00:08:48.621 Test: add_delete_initiator_names_case ...passed 00:08:48.621 Test: add_duplicated_initiator_names_case ...passed 00:08:48.621 Test: delete_nonexisting_initiator_names_case ...passed 00:08:48.621 Test: add_delete_netmasks_case ...passed 00:08:48.621 Test: add_duplicated_netmasks_case ...passed 00:08:48.621 Test: delete_nonexisting_netmasks_case ...passed 00:08:48.621 00:08:48.621 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.621 suites 1 1 n/a 0 0 00:08:48.621 tests 17 17 17 0 0 00:08:48.621 asserts 108 108 108 0 n/a 00:08:48.621 00:08:48.621 Elapsed time = 0.001 seconds 00:08:48.880 22:54:38 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:48.880 00:08:48.880 00:08:48.880 CUnit - A unit testing framework for C - 
Version 2.1-3 00:08:48.880 http://cunit.sourceforge.net/ 00:08:48.880 00:08:48.880 00:08:48.880 Suite: portal_grp_suite 00:08:48.880 Test: portal_create_ipv4_normal_case ...passed 00:08:48.880 Test: portal_create_ipv6_normal_case ...passed 00:08:48.880 Test: portal_create_ipv4_wildcard_case ...passed 00:08:48.880 Test: portal_create_ipv6_wildcard_case ...passed 00:08:48.880 Test: portal_create_twice_case ...[2024-07-13 22:54:38.047393] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:48.880 passed 00:08:48.880 Test: portal_grp_register_unregister_case ...passed 00:08:48.880 Test: portal_grp_register_twice_case ...passed 00:08:48.880 Test: portal_grp_add_delete_case ...passed 00:08:48.880 Test: portal_grp_add_delete_twice_case ...passed 00:08:48.880 00:08:48.880 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.880 suites 1 1 n/a 0 0 00:08:48.880 tests 9 9 9 0 0 00:08:48.880 asserts 44 44 44 0 n/a 00:08:48.880 00:08:48.880 Elapsed time = 0.003 seconds 00:08:48.880 00:08:48.880 real 0m0.229s 00:08:48.880 user 0m0.177s 00:08:48.880 sys 0m0.055s 00:08:48.880 22:54:38 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.880 22:54:38 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:08:48.880 ************************************ 00:08:48.880 END TEST unittest_iscsi 00:08:48.880 ************************************ 00:08:48.880 22:54:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:48.880 22:54:38 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:08:48.880 22:54:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.880 22:54:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.880 22:54:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:48.880 ************************************ 00:08:48.880 START TEST unittest_json 00:08:48.880 ************************************ 00:08:48.880 22:54:38 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:08:48.880 22:54:38 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:48.880 00:08:48.880 00:08:48.880 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.880 http://cunit.sourceforge.net/ 00:08:48.880 00:08:48.880 00:08:48.880 Suite: json 00:08:48.880 Test: test_parse_literal ...passed 00:08:48.880 Test: test_parse_string_simple ...passed 00:08:48.880 Test: test_parse_string_control_chars ...passed 00:08:48.880 Test: test_parse_string_utf8 ...passed 00:08:48.880 Test: test_parse_string_escapes_twochar ...passed 00:08:48.880 Test: test_parse_string_escapes_unicode ...passed 00:08:48.880 Test: test_parse_number ...passed 00:08:48.880 Test: test_parse_array ...passed 00:08:48.880 Test: test_parse_object ...passed 00:08:48.880 Test: test_parse_nesting ...passed 00:08:48.880 Test: test_parse_comment ...passed 00:08:48.880 00:08:48.880 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.880 suites 1 1 n/a 0 0 00:08:48.880 tests 11 11 11 0 0 00:08:48.880 asserts 1516 1516 1516 0 n/a 00:08:48.880 00:08:48.880 Elapsed time = 0.001 seconds 00:08:48.880 22:54:38 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:48.880 00:08:48.880 00:08:48.880 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.880 
http://cunit.sourceforge.net/ 00:08:48.880 00:08:48.880 00:08:48.880 Suite: json 00:08:48.880 Test: test_strequal ...passed 00:08:48.880 Test: test_num_to_uint16 ...passed 00:08:48.880 Test: test_num_to_int32 ...passed 00:08:48.880 Test: test_num_to_uint64 ...passed 00:08:48.880 Test: test_decode_object ...passed 00:08:48.880 Test: test_decode_array ...passed 00:08:48.880 Test: test_decode_bool ...passed 00:08:48.880 Test: test_decode_uint16 ...passed 00:08:48.880 Test: test_decode_int32 ...passed 00:08:48.880 Test: test_decode_uint32 ...passed 00:08:48.880 Test: test_decode_uint64 ...passed 00:08:48.880 Test: test_decode_string ...passed 00:08:48.880 Test: test_decode_uuid ...passed 00:08:48.880 Test: test_find ...passed 00:08:48.880 Test: test_find_array ...passed 00:08:48.880 Test: test_iterating ...passed 00:08:48.880 Test: test_free_object ...passed 00:08:48.880 00:08:48.880 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.880 suites 1 1 n/a 0 0 00:08:48.880 tests 17 17 17 0 0 00:08:48.880 asserts 236 236 236 0 n/a 00:08:48.880 00:08:48.880 Elapsed time = 0.001 seconds 00:08:48.880 22:54:38 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:48.880 00:08:48.880 00:08:48.880 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.880 http://cunit.sourceforge.net/ 00:08:48.880 00:08:48.880 00:08:48.880 Suite: json 00:08:48.880 Test: test_write_literal ...passed 00:08:48.880 Test: test_write_string_simple ...passed 00:08:48.880 Test: test_write_string_escapes ...passed 00:08:48.880 Test: test_write_string_utf16le ...passed 00:08:48.880 Test: test_write_number_int32 ...passed 00:08:48.880 Test: test_write_number_uint32 ...passed 00:08:48.880 Test: test_write_number_uint128 ...passed 00:08:48.880 Test: test_write_string_number_uint128 ...passed 00:08:48.880 Test: test_write_number_int64 ...passed 00:08:48.880 Test: test_write_number_uint64 ...passed 00:08:48.880 Test: test_write_number_double ...passed 00:08:48.880 Test: test_write_uuid ...passed 00:08:48.880 Test: test_write_array ...passed 00:08:48.880 Test: test_write_object ...passed 00:08:48.880 Test: test_write_nesting ...passed 00:08:48.881 Test: test_write_val ...passed 00:08:48.881 00:08:48.881 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.881 suites 1 1 n/a 0 0 00:08:48.881 tests 16 16 16 0 0 00:08:48.881 asserts 918 918 918 0 n/a 00:08:48.881 00:08:48.881 Elapsed time = 0.004 seconds 00:08:48.881 22:54:38 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:48.881 00:08:48.881 00:08:48.881 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.881 http://cunit.sourceforge.net/ 00:08:48.881 00:08:48.881 00:08:48.881 Suite: jsonrpc 00:08:48.881 Test: test_parse_request ...passed 00:08:48.881 Test: test_parse_request_streaming ...passed 00:08:48.881 00:08:48.881 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.881 suites 1 1 n/a 0 0 00:08:48.881 tests 2 2 2 0 0 00:08:48.881 asserts 289 289 289 0 n/a 00:08:48.881 00:08:48.881 Elapsed time = 0.003 seconds 00:08:48.881 00:08:48.881 real 0m0.126s 00:08:48.881 user 0m0.066s 00:08:48.881 sys 0m0.061s 00:08:48.881 22:54:38 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.881 22:54:38 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:08:48.881 ************************************ 00:08:48.881 END TEST unittest_json 
00:08:48.881 ************************************ 00:08:49.139 22:54:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:49.139 22:54:38 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:08:49.139 22:54:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.139 22:54:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.139 22:54:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 ************************************ 00:08:49.139 START TEST unittest_rpc 00:08:49.139 ************************************ 00:08:49.139 22:54:38 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:08:49.139 22:54:38 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:49.139 00:08:49.139 00:08:49.139 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.139 http://cunit.sourceforge.net/ 00:08:49.139 00:08:49.139 00:08:49.139 Suite: rpc 00:08:49.139 Test: test_jsonrpc_handler ...passed 00:08:49.139 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:49.139 Test: test_rpc_get_methods ...[2024-07-13 22:54:38.315639] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:49.139 passed 00:08:49.139 Test: test_rpc_spdk_get_version ...passed 00:08:49.139 Test: test_spdk_rpc_listen_close ...passed 00:08:49.139 Test: test_rpc_run_multiple_servers ...passed 00:08:49.139 00:08:49.139 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.139 suites 1 1 n/a 0 0 00:08:49.139 tests 6 6 6 0 0 00:08:49.139 asserts 23 23 23 0 n/a 00:08:49.139 00:08:49.139 Elapsed time = 0.000 seconds 00:08:49.139 00:08:49.139 real 0m0.035s 00:08:49.139 user 0m0.027s 00:08:49.139 sys 0m0.007s 00:08:49.139 22:54:38 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.139 ************************************ 00:08:49.139 END TEST unittest_rpc 00:08:49.139 ************************************ 00:08:49.139 22:54:38 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.139 22:54:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:49.140 22:54:38 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:49.140 ************************************ 00:08:49.140 START TEST unittest_notify 00:08:49.140 ************************************ 00:08:49.140 22:54:38 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:49.140 00:08:49.140 00:08:49.140 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.140 http://cunit.sourceforge.net/ 00:08:49.140 00:08:49.140 00:08:49.140 Suite: app_suite 00:08:49.140 Test: notify ...passed 00:08:49.140 00:08:49.140 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.140 suites 1 1 n/a 0 0 00:08:49.140 tests 1 1 1 0 0 00:08:49.140 asserts 13 13 13 0 n/a 00:08:49.140 00:08:49.140 Elapsed time = 0.000 seconds 00:08:49.140 00:08:49.140 real 0m0.034s 00:08:49.140 user 0m0.025s 00:08:49.140 sys 0m0.009s 00:08:49.140 22:54:38 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:49.140 22:54:38 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:08:49.140 ************************************ 00:08:49.140 END TEST unittest_notify 00:08:49.140 ************************************ 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:49.140 22:54:38 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.140 22:54:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:49.140 ************************************ 00:08:49.140 START TEST unittest_nvme 00:08:49.140 ************************************ 00:08:49.140 22:54:38 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:08:49.140 22:54:38 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:49.140 00:08:49.140 00:08:49.140 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.140 http://cunit.sourceforge.net/ 00:08:49.140 00:08:49.140 00:08:49.140 Suite: nvme 00:08:49.140 Test: test_opc_data_transfer ...passed 00:08:49.140 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:49.140 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:49.140 Test: test_trid_parse_and_compare ...[2024-07-13 22:54:38.483952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:49.140 [2024-07-13 22:54:38.484276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:49.140 [2024-07-13 22:54:38.484372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:49.140 [2024-07-13 22:54:38.484416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:49.140 [2024-07-13 22:54:38.484453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:08:49.140 [2024-07-13 22:54:38.484556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:49.140 passed 00:08:49.140 Test: test_trid_trtype_str ...passed 00:08:49.140 Test: test_trid_adrfam_str ...passed 00:08:49.140 Test: test_nvme_ctrlr_probe ...[2024-07-13 22:54:38.484792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:49.140 passed 00:08:49.140 Test: test_spdk_nvme_probe ...[2024-07-13 22:54:38.484938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:49.140 [2024-07-13 22:54:38.484976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:49.140 [2024-07-13 22:54:38.485075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:49.140 [2024-07-13 22:54:38.485119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:49.140 passed 00:08:49.140 Test: test_spdk_nvme_connect ...[2024-07-13 22:54:38.485230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 
00:08:49.140 passed 00:08:49.140 Test: test_nvme_ctrlr_probe_internal ...[2024-07-13 22:54:38.485591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:49.140 [2024-07-13 22:54:38.485753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:49.140 [2024-07-13 22:54:38.485794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:49.140 passed 00:08:49.140 Test: test_nvme_init_controllers ...passed 00:08:49.140 Test: test_nvme_driver_init ...[2024-07-13 22:54:38.485949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:49.140 [2024-07-13 22:54:38.486061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:49.140 [2024-07-13 22:54:38.486102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:49.399 [2024-07-13 22:54:38.599741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:49.399 passed 00:08:49.399 Test: test_spdk_nvme_detach ...[2024-07-13 22:54:38.599956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:49.399 passed 00:08:49.399 Test: test_nvme_completion_poll_cb ...passed 00:08:49.399 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:49.399 Test: test_nvme_allocate_request_null ...passed 00:08:49.399 Test: test_nvme_allocate_request ...passed 00:08:49.399 Test: test_nvme_free_request ...passed 00:08:49.399 Test: test_nvme_allocate_request_user_copy ...passed 00:08:49.399 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:49.399 Test: test_nvme_request_check_timeout ...passed 00:08:49.399 Test: test_nvme_wait_for_completion ...passed 00:08:49.399 Test: test_spdk_nvme_parse_func ...passed 00:08:49.399 Test: test_spdk_nvme_detach_async ...passed 00:08:49.399 Test: test_nvme_parse_addr ...[2024-07-13 22:54:38.600967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:49.399 passed 00:08:49.399 00:08:49.399 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.399 suites 1 1 n/a 0 0 00:08:49.399 tests 25 25 25 0 0 00:08:49.399 asserts 326 326 326 0 n/a 00:08:49.399 00:08:49.399 Elapsed time = 0.007 seconds 00:08:49.399 22:54:38 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:49.399 00:08:49.399 00:08:49.399 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.399 http://cunit.sourceforge.net/ 00:08:49.399 00:08:49.399 00:08:49.399 Suite: nvme_ctrlr 00:08:49.399 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-13 22:54:38.629829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-13 22:54:38.631493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-13 22:54:38.632838] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-13 22:54:38.634207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-13 22:54:38.635468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 [2024-07-13 22:54:38.636644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 22:54:38.637851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 22:54:38.639057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:49.399 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-13 22:54:38.641458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 [2024-07-13 22:54:38.643810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 22:54:38.645014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:49.399 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-13 22:54:38.647494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 [2024-07-13 22:54:38.648743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 22:54:38.651217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:49.399 Test: test_nvme_ctrlr_init_delay ...[2024-07-13 22:54:38.653731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_alloc_io_qpair_rr_1 ...[2024-07-13 22:54:38.655070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 [2024-07-13 22:54:38.655276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:49.399 [2024-07-13 22:54:38.655507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:49.399 passed 00:08:49.399 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:49.399 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:49.399 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-13 22:54:38.655583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 
394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:49.399 [2024-07-13 22:54:38.655626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:49.399 [2024-07-13 22:54:38.655733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-13 22:54:38.655945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 [2024-07-13 22:54:38.656074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:49.399 passed 00:08:49.399 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-13 22:54:38.656323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:49.399 [2024-07-13 22:54:38.656482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:49.399 [2024-07-13 22:54:38.656608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:08:49.399 [2024-07-13 22:54:38.656699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:49.399 passed 00:08:49.399 Test: test_nvme_ctrlr_fail ...passed 00:08:49.399 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:49.399 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:49.399 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-13 22:54:38.656775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:08:49.399 [2024-07-13 22:54:38.656957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.399 passed 00:08:49.399 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:49.399 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-13 22:54:38.658391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:49.658 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:49.658 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:49.658 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-13 22:54:38.980502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-13 22:54:38.987827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-13 22:54:38.989104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 [2024-07-13 22:54:38.989221] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:49.658 passed 00:08:49.658 Test: test_alloc_io_qpair_fail ...[2024-07-13 22:54:38.990398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:49.658 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:08:49.658 Test: test_nvme_ctrlr_set_state ...[2024-07-13 22:54:38.990499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:49.658 [2024-07-13 22:54:38.990642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-13 22:54:38.990716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-13 22:54:39.013635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-13 22:54:39.056113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_reset ...[2024-07-13 22:54:39.057733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_aer_callback ...[2024-07-13 22:54:39.058105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-13 22:54:39.059601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:49.658 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:49.658 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-13 22:54:39.061437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.658 passed 00:08:49.658 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:49.658 Test: test_nvme_ctrlr_ana_resize ...[2024-07-13 22:54:39.062889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.917 passed 00:08:49.917 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:49.917 Test: test_nvme_transport_ctrlr_ready ...[2024-07-13 22:54:39.064536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:49.917 [2024-07-13 22:54:39.064608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:08:49.917 passed 00:08:49.917 Test: test_nvme_ctrlr_disable ...[2024-07-13 22:54:39.064667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:49.917 passed 00:08:49.917 00:08:49.917 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.917 suites 1 1 n/a 0 0 00:08:49.917 tests 44 44 44 0 0 00:08:49.917 asserts 10434 10434 10434 0 n/a 00:08:49.917 00:08:49.917 Elapsed time = 0.393 seconds 00:08:49.917 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 
00:08:49.917 00:08:49.917 00:08:49.917 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.917 http://cunit.sourceforge.net/ 00:08:49.917 00:08:49.917 00:08:49.917 Suite: nvme_ctrlr_cmd 00:08:49.917 Test: test_get_log_pages ...passed 00:08:49.917 Test: test_set_feature_cmd ...passed 00:08:49.917 Test: test_set_feature_ns_cmd ...passed 00:08:49.917 Test: test_get_feature_cmd ...passed 00:08:49.917 Test: test_get_feature_ns_cmd ...passed 00:08:49.917 Test: test_abort_cmd ...passed 00:08:49.917 Test: test_set_host_id_cmds ...[2024-07-13 22:54:39.111903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:49.918 passed 00:08:49.918 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:49.918 Test: test_io_raw_cmd ...passed 00:08:49.918 Test: test_io_raw_cmd_with_md ...passed 00:08:49.918 Test: test_namespace_attach ...passed 00:08:49.918 Test: test_namespace_detach ...passed 00:08:49.918 Test: test_namespace_create ...passed 00:08:49.918 Test: test_namespace_delete ...passed 00:08:49.918 Test: test_doorbell_buffer_config ...passed 00:08:49.918 Test: test_format_nvme ...passed 00:08:49.918 Test: test_fw_commit ...passed 00:08:49.918 Test: test_fw_image_download ...passed 00:08:49.918 Test: test_sanitize ...passed 00:08:49.918 Test: test_directive ...passed 00:08:49.918 Test: test_nvme_request_add_abort ...passed 00:08:49.918 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:49.918 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:49.918 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:49.918 00:08:49.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.918 suites 1 1 n/a 0 0 00:08:49.918 tests 24 24 24 0 0 00:08:49.918 asserts 198 198 198 0 n/a 00:08:49.918 00:08:49.918 Elapsed time = 0.001 seconds 00:08:49.918 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:49.918 00:08:49.918 00:08:49.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.918 http://cunit.sourceforge.net/ 00:08:49.918 00:08:49.918 00:08:49.918 Suite: nvme_ctrlr_cmd 00:08:49.918 Test: test_geometry_cmd ...passed 00:08:49.918 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:49.918 00:08:49.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.918 suites 1 1 n/a 0 0 00:08:49.918 tests 2 2 2 0 0 00:08:49.918 asserts 7 7 7 0 n/a 00:08:49.918 00:08:49.918 Elapsed time = 0.000 seconds 00:08:49.918 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:49.918 00:08:49.918 00:08:49.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.918 http://cunit.sourceforge.net/ 00:08:49.918 00:08:49.918 00:08:49.918 Suite: nvme 00:08:49.918 Test: test_nvme_ns_construct ...passed 00:08:49.918 Test: test_nvme_ns_uuid ...passed 00:08:49.918 Test: test_nvme_ns_csi ...passed 00:08:49.918 Test: test_nvme_ns_data ...passed 00:08:49.918 Test: test_nvme_ns_set_identify_data ...passed 00:08:49.918 Test: test_spdk_nvme_ns_get_values ...passed 00:08:49.918 Test: test_spdk_nvme_ns_is_active ...passed 00:08:49.918 Test: spdk_nvme_ns_supports ...passed 00:08:49.918 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:49.918 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:49.918 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:49.918 Test: 
test_nvme_ns_find_id_desc ...passed 00:08:49.918 00:08:49.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.918 suites 1 1 n/a 0 0 00:08:49.918 tests 12 12 12 0 0 00:08:49.918 asserts 95 95 95 0 n/a 00:08:49.918 00:08:49.918 Elapsed time = 0.000 seconds 00:08:49.918 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:49.918 00:08:49.918 00:08:49.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.918 http://cunit.sourceforge.net/ 00:08:49.918 00:08:49.918 00:08:49.918 Suite: nvme_ns_cmd 00:08:49.918 Test: split_test ...passed 00:08:49.918 Test: split_test2 ...passed 00:08:49.918 Test: split_test3 ...passed 00:08:49.918 Test: split_test4 ...passed 00:08:49.918 Test: test_nvme_ns_cmd_flush ...passed 00:08:49.918 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:49.918 Test: test_nvme_ns_cmd_copy ...passed 00:08:49.918 Test: test_io_flags ...[2024-07-13 22:54:39.202291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:49.918 passed 00:08:49.918 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:49.918 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:49.918 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:49.918 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:49.918 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:49.918 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:49.918 Test: test_cmd_child_request ...passed 00:08:49.918 Test: test_nvme_ns_cmd_readv ...passed 00:08:49.918 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:49.918 Test: test_nvme_ns_cmd_writev ...[2024-07-13 22:54:39.204522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:49.918 passed 00:08:49.918 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:49.918 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:49.918 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:49.918 Test: test_nvme_ns_cmd_comparev ...passed 00:08:49.918 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:49.918 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:49.918 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:49.918 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:49.918 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:49.918 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-13 22:54:39.207199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:49.918 passed 00:08:49.918 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-13 22:54:39.207353] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:49.918 passed 00:08:49.918 Test: test_nvme_ns_cmd_verify ...passed 00:08:49.918 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:49.918 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:49.918 00:08:49.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.918 suites 1 1 n/a 0 0 00:08:49.918 tests 32 32 32 0 0 00:08:49.918 asserts 550 550 550 0 n/a 00:08:49.918 00:08:49.918 Elapsed time = 0.008 seconds 00:08:49.918 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:49.918 00:08:49.918 00:08:49.918 CUnit - A unit 
testing framework for C - Version 2.1-3 00:08:49.918 http://cunit.sourceforge.net/ 00:08:49.918 00:08:49.918 00:08:49.918 Suite: nvme_ns_cmd 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:49.918 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:49.918 00:08:49.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.918 suites 1 1 n/a 0 0 00:08:49.918 tests 12 12 12 0 0 00:08:49.918 asserts 123 123 123 0 n/a 00:08:49.918 00:08:49.918 Elapsed time = 0.001 seconds 00:08:49.918 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:49.918 00:08:49.918 00:08:49.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.918 http://cunit.sourceforge.net/ 00:08:49.918 00:08:49.918 00:08:49.918 Suite: nvme_qpair 00:08:49.918 Test: test3 ...passed 00:08:49.918 Test: test_ctrlr_failed ...passed 00:08:49.918 Test: struct_packing ...passed 00:08:49.918 Test: test_nvme_qpair_process_completions ...[2024-07-13 22:54:39.268057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:49.918 [2024-07-13 22:54:39.268438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:49.918 [2024-07-13 22:54:39.268520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:49.918 passed 00:08:49.918 Test: test_nvme_completion_is_retry ...passed 00:08:49.918 Test: test_get_status_string ...[2024-07-13 22:54:39.268618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:49.918 passed 00:08:49.918 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:49.918 Test: test_nvme_qpair_submit_request ...passed 00:08:49.918 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:49.918 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:49.918 Test: test_nvme_qpair_init_deinit ...passed 00:08:49.918 Test: test_nvme_get_sgl_print_info ...[2024-07-13 22:54:39.269074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:49.918 passed 00:08:49.918 00:08:49.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.918 suites 1 1 n/a 0 0 00:08:49.918 tests 12 12 12 0 0 00:08:49.918 asserts 154 154 154 0 n/a 00:08:49.918 00:08:49.918 Elapsed time = 0.001 seconds 00:08:49.918 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@96 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:49.918 00:08:49.918 00:08:49.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.918 http://cunit.sourceforge.net/ 00:08:49.918 00:08:49.918 00:08:49.918 Suite: nvme_pcie 00:08:49.918 Test: test_prp_list_append ...[2024-07-13 22:54:39.302829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:49.919 [2024-07-13 22:54:39.303155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:49.919 [2024-07-13 22:54:39.303209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:49.919 [2024-07-13 22:54:39.303487] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:49.919 passed 00:08:49.919 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-13 22:54:39.303596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:49.919 passed 00:08:49.919 Test: test_shadow_doorbell_update ...passed 00:08:49.919 Test: test_build_contig_hw_sgl_request ...passed 00:08:49.919 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:49.919 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:49.919 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:49.919 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-13 22:54:39.303797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:49.919 passed 00:08:49.919 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:49.919 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:49.919 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:49.919 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-13 22:54:39.303888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:49.919 [2024-07-13 22:54:39.303967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:49.919 passed 00:08:49.919 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:08:49.919 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:08:49.919 00:08:49.919 [2024-07-13 22:54:39.304019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:49.919 [2024-07-13 22:54:39.304069] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:49.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.919 suites 1 1 n/a 0 0 00:08:49.919 tests 14 14 14 0 0 00:08:49.919 asserts 235 235 235 0 n/a 00:08:49.919 00:08:49.919 Elapsed time = 0.001 seconds 00:08:50.179 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:50.179 00:08:50.179 00:08:50.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.179 http://cunit.sourceforge.net/ 00:08:50.179 00:08:50.179 00:08:50.179 Suite: nvme_ns_cmd 00:08:50.179 Test: nvme_poll_group_create_test ...passed 00:08:50.179 Test: nvme_poll_group_add_remove_test ...passed 00:08:50.179 Test: nvme_poll_group_process_completions ...passed 00:08:50.179 Test: nvme_poll_group_destroy_test ...passed 00:08:50.179 Test: nvme_poll_group_get_free_stats ...passed 00:08:50.179 00:08:50.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.179 suites 1 1 n/a 0 0 00:08:50.179 tests 5 5 5 0 0 00:08:50.179 asserts 75 75 75 0 n/a 00:08:50.179 00:08:50.179 Elapsed time = 0.000 seconds 00:08:50.179 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:50.179 00:08:50.179 00:08:50.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.179 http://cunit.sourceforge.net/ 00:08:50.179 00:08:50.179 00:08:50.179 Suite: nvme_quirks 00:08:50.179 Test: test_nvme_quirks_striping ...passed 00:08:50.179 00:08:50.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.179 suites 1 1 n/a 0 0 00:08:50.179 tests 1 1 1 0 0 00:08:50.179 asserts 5 5 5 0 n/a 00:08:50.179 00:08:50.179 Elapsed time = 0.000 seconds 00:08:50.179 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:50.179 00:08:50.179 00:08:50.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.179 http://cunit.sourceforge.net/ 00:08:50.179 00:08:50.179 00:08:50.179 Suite: nvme_tcp 00:08:50.179 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:50.179 Test: test_nvme_tcp_build_iovs ...passed 00:08:50.179 Test: test_nvme_tcp_build_sgl_request ...[2024-07-13 22:54:39.397572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd7102b1f0, and the iovcnt=16, remaining_size=28672 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:50.179 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:50.179 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:50.179 Test: test_nvme_tcp_req_get ...passed 00:08:50.179 Test: test_nvme_tcp_req_init ...passed 00:08:50.179 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:50.179 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:50.179 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:08:50.179 Test: test_nvme_tcp_alloc_reqs ...[2024-07-13 22:54:39.398195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102cf30 is same with the state(6) to be set 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-13 22:54:39.398511] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c0e0 is same with the state(5) to be set 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-13 22:54:39.398594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd7102cc70 00:08:50.179 [2024-07-13 22:54:39.398654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:50.179 [2024-07-13 22:54:39.398740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.398807] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:50.179 [2024-07-13 22:54:39.398889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.398936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:50.179 [2024-07-13 22:54:39.398973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.399028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.399087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.399156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.399199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.399251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c5a0 is same with the state(5) to be set 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-13 22:54:39.399422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:50.179 [2024-07-13 22:54:39.399476] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:50.179 [2024-07-13 22:54:39.399744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:50.179 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-13 22:54:39.399896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd7102c7b0): PDU Sequence Error 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_icresp_handle ...[2024-07-13 22:54:39.399961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:50.179 [2024-07-13 22:54:39.400006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:50.179 [2024-07-13 22:54:39.400050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c0f0 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.400100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:50.179 [2024-07-13 22:54:39.400160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c0f0 is same with the state(5) to be set 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:08:50.179 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:08:50.179 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-07-13 22:54:39.400221] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102c0f0 is same with the state(0) to be set 00:08:50.179 [2024-07-13 22:54:39.400284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd7102cc70): PDU Sequence Error 00:08:50.179 [2024-07-13 22:54:39.400376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd7102b3b0 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-13 22:54:39.400527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd7102aa30, errno=0, rc=0 00:08:50.179 [2024-07-13 22:54:39.400591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102aa30 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.400668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd7102aa30 is same with the state(5) to be set 00:08:50.179 [2024-07-13 22:54:39.400732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd7102aa30 (0): Success 00:08:50.179 [2024-07-13 22:54:39.400782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd7102aa30 (0): Success 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-13 22:54:39.520741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:08:50.179 [2024-07-13 22:54:39.520847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:50.179 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-13 22:54:39.521169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:50.179 [2024-07-13 22:54:39.521216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-13 22:54:39.521413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:50.179 [2024-07-13 22:54:39.521471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:50.179 [2024-07-13 22:54:39.521582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:50.179 [2024-07-13 22:54:39.521643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:50.179 [2024-07-13 22:54:39.521757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:08:50.179 passed 00:08:50.179 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-13 22:54:39.521831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:50.179 [2024-07-13 22:54:39.521973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:08:50.179 passed 00:08:50.179 00:08:50.179 [2024-07-13 22:54:39.522022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:50.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.179 suites 1 1 n/a 0 0 00:08:50.180 tests 27 27 27 0 0 00:08:50.180 asserts 624 624 624 0 n/a 00:08:50.180 00:08:50.180 Elapsed time = 0.124 seconds 00:08:50.180 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:50.180 00:08:50.180 00:08:50.180 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.180 http://cunit.sourceforge.net/ 00:08:50.180 00:08:50.180 00:08:50.180 Suite: nvme_transport 00:08:50.180 Test: test_nvme_get_transport ...passed 00:08:50.180 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:50.180 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:50.180 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:50.180 Test: test_ctrlr_get_memory_domains ...passed 00:08:50.180 00:08:50.180 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.180 suites 1 1 n/a 0 0 00:08:50.180 tests 5 5 5 0 0 00:08:50.180 asserts 28 28 28 0 n/a 00:08:50.180 00:08:50.180 Elapsed time = 0.000 seconds 00:08:50.180 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:50.438 00:08:50.438 
00:08:50.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.438 http://cunit.sourceforge.net/ 00:08:50.438 00:08:50.438 00:08:50.438 Suite: nvme_io_msg 00:08:50.438 Test: test_nvme_io_msg_send ...passed 00:08:50.438 Test: test_nvme_io_msg_process ...passed 00:08:50.438 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:50.438 00:08:50.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.438 suites 1 1 n/a 0 0 00:08:50.438 tests 3 3 3 0 0 00:08:50.438 asserts 56 56 56 0 n/a 00:08:50.438 00:08:50.438 Elapsed time = 0.000 seconds 00:08:50.438 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:50.438 00:08:50.438 00:08:50.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.438 http://cunit.sourceforge.net/ 00:08:50.438 00:08:50.438 00:08:50.438 Suite: nvme_pcie_common 00:08:50.438 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-13 22:54:39.626517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:50.438 passed 00:08:50.438 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:50.438 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:50.438 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-13 22:54:39.627259] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:50.438 [2024-07-13 22:54:39.627402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:50.438 passed 00:08:50.438 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-13 22:54:39.627453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:50.438 passed 00:08:50.438 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-13 22:54:39.627872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:50.438 [2024-07-13 22:54:39.627933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:50.438 passed 00:08:50.438 00:08:50.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.438 suites 1 1 n/a 0 0 00:08:50.438 tests 6 6 6 0 0 00:08:50.438 asserts 148 148 148 0 n/a 00:08:50.438 00:08:50.438 Elapsed time = 0.002 seconds 00:08:50.438 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:50.438 00:08:50.438 00:08:50.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.438 http://cunit.sourceforge.net/ 00:08:50.438 00:08:50.438 00:08:50.438 Suite: nvme_fabric 00:08:50.438 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:50.438 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:50.438 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:50.438 Test: test_nvme_fabric_discover_probe ...passed 00:08:50.438 Test: test_nvme_fabric_qpair_connect ...[2024-07-13 22:54:39.657906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:50.438 passed 
00:08:50.438 00:08:50.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.439 suites 1 1 n/a 0 0 00:08:50.439 tests 5 5 5 0 0 00:08:50.439 asserts 60 60 60 0 n/a 00:08:50.439 00:08:50.439 Elapsed time = 0.001 seconds 00:08:50.439 22:54:39 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:50.439 00:08:50.439 00:08:50.439 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.439 http://cunit.sourceforge.net/ 00:08:50.439 00:08:50.439 00:08:50.439 Suite: nvme_opal 00:08:50.439 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:50.439 Test: test_opal_add_short_atom_header ...passed 00:08:50.439 00:08:50.439 [2024-07-13 22:54:39.685255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:50.439 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.439 suites 1 1 n/a 0 0 00:08:50.439 tests 2 2 2 0 0 00:08:50.439 asserts 22 22 22 0 n/a 00:08:50.439 00:08:50.439 Elapsed time = 0.000 seconds 00:08:50.439 00:08:50.439 real 0m1.232s 00:08:50.439 user 0m0.674s 00:08:50.439 sys 0m0.410s 00:08:50.439 22:54:39 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.439 22:54:39 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.439 ************************************ 00:08:50.439 END TEST unittest_nvme 00:08:50.439 ************************************ 00:08:50.439 22:54:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:50.439 22:54:39 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:50.439 22:54:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:50.439 22:54:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.439 22:54:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:50.439 ************************************ 00:08:50.439 START TEST unittest_log 00:08:50.439 ************************************ 00:08:50.439 22:54:39 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:50.439 00:08:50.439 00:08:50.439 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.439 http://cunit.sourceforge.net/ 00:08:50.439 00:08:50.439 00:08:50.439 Suite: log 00:08:50.439 Test: log_test ...[2024-07-13 22:54:39.771291] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:50.439 [2024-07-13 22:54:39.771564] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:50.439 passed 00:08:50.439 Test: deprecation ...log dump test: 00:08:50.439 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:50.439 spdk dump test: 00:08:50.439 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:50.439 spdk dump test: 00:08:50.439 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:50.439 00000010 65 20 63 68 61 72 73 e chars 00:08:51.374 passed 00:08:51.374 00:08:51.374 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.374 suites 1 1 n/a 0 0 00:08:51.374 tests 2 2 2 0 0 00:08:51.374 asserts 73 73 73 0 n/a 00:08:51.374 00:08:51.374 Elapsed time = 0.001 seconds 00:08:51.640 00:08:51.640 real 0m1.032s 00:08:51.640 user 0m0.021s 00:08:51.640 sys 0m0.012s 00:08:51.640 22:54:40 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.640 22:54:40 unittest.unittest_log -- common/autotest_common.sh@10 -- # set 
+x 00:08:51.640 ************************************ 00:08:51.640 END TEST unittest_log 00:08:51.640 ************************************ 00:08:51.640 22:54:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:51.640 22:54:40 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:51.640 22:54:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.640 22:54:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.641 22:54:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:51.641 ************************************ 00:08:51.641 START TEST unittest_lvol 00:08:51.641 ************************************ 00:08:51.641 22:54:40 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:51.641 00:08:51.641 00:08:51.641 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.641 http://cunit.sourceforge.net/ 00:08:51.641 00:08:51.641 00:08:51.641 Suite: lvol 00:08:51.641 Test: lvs_init_unload_success ...[2024-07-13 22:54:40.858176] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:51.641 passed 00:08:51.641 Test: lvs_init_destroy_success ...[2024-07-13 22:54:40.858691] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:51.641 passed 00:08:51.641 Test: lvs_init_opts_success ...passed 00:08:51.641 Test: lvs_unload_lvs_is_null_fail ...[2024-07-13 22:54:40.858960] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:51.641 passed 00:08:51.641 Test: lvs_names ...[2024-07-13 22:54:40.859029] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:51.641 [2024-07-13 22:54:40.859096] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:08:51.641 [2024-07-13 22:54:40.859264] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:51.641 passed 00:08:51.641 Test: lvol_create_destroy_success ...passed 00:08:51.641 Test: lvol_create_fail ...[2024-07-13 22:54:40.859841] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:51.641 passed 00:08:51.641 Test: lvol_destroy_fail ...[2024-07-13 22:54:40.859970] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:51.641 [2024-07-13 22:54:40.860284] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:51.641 passed 00:08:51.641 Test: lvol_close ...[2024-07-13 22:54:40.860518] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:51.641 passed 00:08:51.641 Test: lvol_resize ...[2024-07-13 22:54:40.860585] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:51.641 passed 00:08:51.641 Test: lvol_set_read_only ...passed 00:08:51.641 Test: test_lvs_load ...[2024-07-13 22:54:40.861408] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:51.641 passed 00:08:51.641 Test: lvols_load ...[2024-07-13 22:54:40.861467] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:51.641 [2024-07-13 22:54:40.861720] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:51.641 passed 00:08:51.641 Test: lvol_open ...[2024-07-13 22:54:40.861846] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:51.641 passed 00:08:51.641 Test: lvol_snapshot ...passed 00:08:51.641 Test: lvol_snapshot_fail ...[2024-07-13 22:54:40.862523] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:51.641 passed 00:08:51.641 Test: lvol_clone ...passed 00:08:51.641 Test: lvol_clone_fail ...[2024-07-13 22:54:40.863068] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:51.641 passed 00:08:51.641 Test: lvol_iter_clones ...passed 00:08:51.641 Test: lvol_refcnt ...[2024-07-13 22:54:40.863582] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 12a4e1b2-bf4c-4c39-bc61-4983a00ea7f2 because it is still open 00:08:51.641 passed 00:08:51.641 Test: lvol_names ...[2024-07-13 22:54:40.863751] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:51.641 [2024-07-13 22:54:40.863842] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:51.641 [2024-07-13 22:54:40.864064] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:51.641 passed 00:08:51.641 Test: lvol_create_thin_provisioned ...passed 00:08:51.641 Test: lvol_rename ...[2024-07-13 22:54:40.864452] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:51.641 [2024-07-13 22:54:40.864569] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:51.641 passed 00:08:51.641 Test: lvs_rename ...[2024-07-13 22:54:40.864814] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:51.641 passed 00:08:51.641 Test: lvol_inflate ...[2024-07-13 22:54:40.865059] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:51.641 passed 00:08:51.641 Test: lvol_decouple_parent ...[2024-07-13 22:54:40.865297] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:51.641 passed 00:08:51.641 Test: lvol_get_xattr ...passed 00:08:51.641 Test: lvol_esnap_reload ...passed 00:08:51.641 Test: lvol_esnap_create_bad_args ...[2024-07-13 22:54:40.865791] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:51.641 [2024-07-13 22:54:40.865846] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:51.641 [2024-07-13 22:54:40.865905] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:51.641 [2024-07-13 22:54:40.866031] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:51.641 [2024-07-13 22:54:40.866194] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:51.641 passed 00:08:51.641 Test: lvol_esnap_create_delete ...passed 00:08:51.641 Test: lvol_esnap_load_esnaps ...[2024-07-13 22:54:40.866471] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:51.641 passed 00:08:51.641 Test: lvol_esnap_missing ...[2024-07-13 22:54:40.866629] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:51.641 [2024-07-13 22:54:40.866686] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:51.641 passed 00:08:51.641 Test: lvol_esnap_hotplug ... 
00:08:51.641 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:51.641 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:51.641 lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM 00:08:51.641 [2024-07-13 22:54:40.867337] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7262ba1e-9203-48f4-9e4a-512c8a37c7b7: failed to create esnap bs_dev: error -12 00:08:51.641 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:51.641 [2024-07-13 22:54:40.867602] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6f69556e-2995-41d8-ae77-19755a3e840f: failed to create esnap bs_dev: error -12 00:08:51.641 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:51.641 [2024-07-13 22:54:40.867736] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d3f5b82d-f606-4414-a687-9061ccbfa47c: failed to create esnap bs_dev: error -12 00:08:51.641 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:51.642 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:51.642 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:51.642 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:51.642 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:51.642 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:51.642 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:51.642 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:51.642 passed 00:08:51.642 Test: lvol_get_by ...passed 00:08:51.642 Test: lvol_shallow_copy ...[2024-07-13 22:54:40.868881] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:08:51.642 [2024-07-13 22:54:40.868971] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol c264deb8-1718-4edd-9414-43bb7fca6a0c shallow copy, ext_dev must not be NULL 00:08:51.642 passed 00:08:51.642 Test: lvol_set_parent ...[2024-07-13 22:54:40.869193] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:08:51.642 [2024-07-13 22:54:40.869247] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:08:51.642 passed 00:08:51.642 Test: lvol_set_external_parent ...[2024-07-13 22:54:40.869475] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:08:51.642 [2024-07-13 22:54:40.869526] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:08:51.642 [2024-07-13 22:54:40.869589] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:08:51.642 passed 00:08:51.642 00:08:51.642 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.642 suites 1 1 n/a 0 0 00:08:51.642 tests 37 37 37 0 0 00:08:51.642 asserts 1505 1505 1505 0 n/a 00:08:51.642 00:08:51.642 Elapsed time = 0.012 seconds 00:08:51.642 00:08:51.642 real 0m0.051s 00:08:51.642 user 0m0.024s 00:08:51.642 sys 0m0.027s
00:08:51.642 22:54:40 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.642 22:54:40 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.642 ************************************ 00:08:51.642 END TEST unittest_lvol 00:08:51.642 ************************************ 00:08:51.642 22:54:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:51.642 22:54:40 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:51.642 22:54:40 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:51.642 22:54:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.642 22:54:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.642 22:54:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:51.642 ************************************ 00:08:51.642 START TEST unittest_nvme_rdma 00:08:51.642 ************************************ 00:08:51.642 22:54:40 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:51.642 00:08:51.642 00:08:51.642 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.642 http://cunit.sourceforge.net/ 00:08:51.642 00:08:51.642 00:08:51.642 Suite: nvme_rdma 00:08:51.642 Test: test_nvme_rdma_build_sgl_request ...[2024-07-13 22:54:40.959426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:51.642 [2024-07-13 22:54:40.959731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-13 22:54:40.959825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_build_contig_request ...passed 00:08:51.642 Test: test_nvme_rdma_build_contig_inline_request ...passed[2024-07-13 22:54:40.959922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:51.642 00:08:51.642 Test: test_nvme_rdma_create_reqs ...[2024-07-13 22:54:40.960032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_create_rsps ...[2024-07-13 22:54:40.960336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-13 22:54:40.960517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_poller_create ...[2024-07-13 22:54:40.960578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:51.642 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-13 22:54:40.960733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:51.642 Test: test_nvme_rdma_req_init ...passed 00:08:51.642 Test: test_nvme_rdma_validate_cm_event ...passed 00:08:51.642 Test: test_nvme_rdma_qpair_init ...passed[2024-07-13 22:54:40.961106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:51.642 [2024-07-13 22:54:40.961162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:51.642 00:08:51.642 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:51.642 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:51.642 Test: test_rdma_get_memory_translation ...passed 00:08:51.642 Test: test_get_rdma_qpair_from_wc ...passed 00:08:51.642 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:51.642 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-13 22:54:40.961294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:51.642 [2024-07-13 22:54:40.961343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:51.642 [2024-07-13 22:54:40.961486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:51.642 [2024-07-13 22:54:40.961536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:51.642 passed 00:08:51.642 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-13 22:54:40.961741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:51.642 [2024-07-13 22:54:40.961796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:51.642 [2024-07-13 22:54:40.961843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffee2854390 on poll group 0x60c000000040 00:08:51.643 [2024-07-13 22:54:40.961879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:51.643 [2024-07-13 22:54:40.961938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:51.643 [2024-07-13 22:54:40.961978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffee2854390 on poll group 0x60c000000040 00:08:51.643 passed 00:08:51.643 00:08:51.643 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.643 suites 1 1 n/a 0 0 00:08:51.643 tests 21 21 21 0 0 00:08:51.643 asserts 397 397 397 0 n/a 00:08:51.643 00:08:51.643 Elapsed time = 0.003 seconds 00:08:51.643 [2024-07-13 22:54:40.962056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:51.643 00:08:51.643 real 0m0.039s 00:08:51.643 user 0m0.026s 00:08:51.643 sys 0m0.013s 00:08:51.643 22:54:40 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.643 22:54:40 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:51.643 ************************************ 00:08:51.643 END TEST unittest_nvme_rdma 00:08:51.643 ************************************ 00:08:51.643 22:54:41 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:51.643 22:54:41 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:51.643 22:54:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.643 22:54:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.643 22:54:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:51.643 ************************************ 00:08:51.643 START TEST unittest_nvmf_transport 00:08:51.643 ************************************ 00:08:51.643 22:54:41 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:51.915 00:08:51.915 00:08:51.915 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.915 http://cunit.sourceforge.net/ 00:08:51.915 00:08:51.915 00:08:51.915 Suite: nvmf 00:08:51.915 Test: test_spdk_nvmf_transport_create ...[2024-07-13 22:54:41.056236] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:51.915 [2024-07-13 22:54:41.056582] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:51.915 [2024-07-13 22:54:41.056663] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:51.915 [2024-07-13 22:54:41.056821] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:51.915 passed 00:08:51.915 Test: test_nvmf_transport_poll_group_create ...passed 00:08:51.915 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-13 22:54:41.057181] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:51.915 [2024-07-13 22:54:41.057287] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:51.915 [2024-07-13 22:54:41.057339] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:51.915 passed 00:08:51.915 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:51.915 00:08:51.915 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.915 suites 1 1 n/a 0 0 00:08:51.915 tests 4 4 4 0 0 00:08:51.915 asserts 49 49 49 0 n/a 00:08:51.915 00:08:51.915 Elapsed time = 0.001 seconds 00:08:51.915 00:08:51.915 real 0m0.042s 00:08:51.915 user 0m0.029s 00:08:51.915 sys 0m0.014s 00:08:51.915 22:54:41 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.915 ************************************ 00:08:51.915 END TEST unittest_nvmf_transport 00:08:51.915 ************************************ 00:08:51.915 22:54:41 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:51.915 22:54:41 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 ************************************ 00:08:51.915 START TEST unittest_rdma 00:08:51.915 ************************************ 00:08:51.915 22:54:41 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:51.915 00:08:51.915 00:08:51.915 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.915 http://cunit.sourceforge.net/ 00:08:51.915 00:08:51.915 00:08:51.915 Suite: rdma_common 00:08:51.915 Test: test_spdk_rdma_pd ...[2024-07-13 22:54:41.144768] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:51.915 [2024-07-13 22:54:41.145204] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:51.915 passed 00:08:51.915 00:08:51.915 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.915 suites 1 1 n/a 0 0 00:08:51.915 tests 1 1 1 0 0 00:08:51.915 asserts 31 31 31 0 n/a 00:08:51.915 00:08:51.915 Elapsed time = 0.001 seconds 00:08:51.915 00:08:51.915 real 0m0.031s 00:08:51.915 user 0m0.026s 00:08:51.915 sys 0m0.004s 00:08:51.915 22:54:41 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.915 22:54:41 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 ************************************ 00:08:51.915 END TEST unittest_rdma 00:08:51.915 ************************************ 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:51.915 22:54:41 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:51.915 22:54:41 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.915 22:54:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 ************************************ 00:08:51.915 START TEST unittest_nvme_cuse 00:08:51.915 ************************************ 00:08:51.915 22:54:41 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:51.915 00:08:51.915 00:08:51.915 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.915 http://cunit.sourceforge.net/ 00:08:51.915 00:08:51.915 00:08:51.915 Suite: nvme_cuse 00:08:51.915 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:51.915 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:51.915 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:51.915 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:51.915 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:51.915 Test: test_cuse_nvme_submit_io ...[2024-07-13 22:54:41.230968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:51.915 passed 00:08:51.915 Test: test_cuse_nvme_reset ...[2024-07-13 22:54:41.231298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:51.915 passed 00:08:52.482 Test: test_nvme_cuse_stop ...passed 00:08:52.482 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:52.482 00:08:52.482 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.482 suites 1 1 n/a 0 0 00:08:52.482 tests 9 9 9 0 0 00:08:52.483 asserts 118 118 118 0 n/a 00:08:52.483 00:08:52.483 Elapsed time = 0.504 seconds 00:08:52.483 00:08:52.483 real 0m0.539s 00:08:52.483 user 0m0.314s 00:08:52.483 sys 0m0.226s 00:08:52.483 22:54:41 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.483 ************************************ 00:08:52.483 END TEST unittest_nvme_cuse 00:08:52.483 ************************************ 00:08:52.483 22:54:41 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:08:52.483 22:54:41 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:52.483 22:54:41 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:08:52.483 22:54:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:52.483 22:54:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.483 22:54:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:52.483 ************************************ 00:08:52.483 START TEST unittest_nvmf 00:08:52.483 ************************************ 00:08:52.483 22:54:41 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:08:52.483 22:54:41 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:52.483 00:08:52.483 00:08:52.483 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.483 http://cunit.sourceforge.net/ 00:08:52.483 00:08:52.483 00:08:52.483 Suite: nvmf 00:08:52.483 Test: test_get_log_page ...[2024-07-13 22:54:41.822750] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:52.483 passed 00:08:52.483 Test: test_process_fabrics_cmd ...[2024-07-13 22:54:41.823109] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on 
qid 0 before CONNECT 00:08:52.483 passed 00:08:52.483 Test: test_connect ...[2024-07-13 22:54:41.823765] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:52.483 [2024-07-13 22:54:41.823888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:52.483 [2024-07-13 22:54:41.823936] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:52.483 [2024-07-13 22:54:41.823985] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:52.483 [2024-07-13 22:54:41.824091] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:52.483 [2024-07-13 22:54:41.824159] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:52.483 [2024-07-13 22:54:41.824199] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:52.483 [2024-07-13 22:54:41.824259] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:52.483 [2024-07-13 22:54:41.824374] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:52.483 [2024-07-13 22:54:41.824466] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:52.483 [2024-07-13 22:54:41.824771] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:52.483 [2024-07-13 22:54:41.824903] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:52.483 [2024-07-13 22:54:41.825012] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:52.483 [2024-07-13 22:54:41.825107] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:52.483 [2024-07-13 22:54:41.825203] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:08:52.483 [2024-07-13 22:54:41.825378] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:08:52.483 [2024-07-13 22:54:41.825460] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:52.483 passed 00:08:52.483 Test: test_get_ns_id_desc_list ...passed 00:08:52.483 Test: test_identify_ns ...[2024-07-13 22:54:41.825745] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:52.483 [2024-07-13 22:54:41.826051] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:52.483 [2024-07-13 22:54:41.826186] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 
00:08:52.483 passed 00:08:52.483 Test: test_identify_ns_iocs_specific ...[2024-07-13 22:54:41.826350] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:52.483 [2024-07-13 22:54:41.826662] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:52.483 passed 00:08:52.483 Test: test_reservation_write_exclusive ...passed 00:08:52.483 Test: test_reservation_exclusive_access ...passed 00:08:52.483 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:52.483 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:52.483 Test: test_reservation_notification_log_page ...passed 00:08:52.483 Test: test_get_dif_ctx ...passed 00:08:52.483 Test: test_set_get_features ...[2024-07-13 22:54:41.827182] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:52.483 [2024-07-13 22:54:41.827261] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:52.483 passed 00:08:52.483 Test: test_identify_ctrlr ...passed 00:08:52.483 Test: test_identify_ctrlr_iocs_specific ...[2024-07-13 22:54:41.827309] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:52.483 [2024-07-13 22:54:41.827347] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:52.483 passed 00:08:52.483 Test: test_custom_admin_cmd ...passed 00:08:52.483 Test: test_fused_compare_and_write ...[2024-07-13 22:54:41.827859] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:52.483 [2024-07-13 22:54:41.827916] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:52.483 passed 00:08:52.483 Test: test_multi_async_event_reqs ...[2024-07-13 22:54:41.827969] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:52.483 passed 00:08:52.483 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:52.483 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:52.483 Test: test_multi_async_events ...passed 00:08:52.483 Test: test_rae ...passed 00:08:52.483 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:52.483 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:52.483 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-13 22:54:41.828552] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:08:52.483 passed 00:08:52.483 Test: test_zcopy_read ...[2024-07-13 22:54:41.828631] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:08:52.483 passed 00:08:52.483 Test: test_zcopy_write ...passed 00:08:52.483 Test: test_nvmf_property_set ...passed 00:08:52.483 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-13 22:54:41.828860] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:52.483 [2024-07-13 22:54:41.829022] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:52.483 passed 00:08:52.483 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-13 22:54:41.829083] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:52.483 passed 00:08:52.483 Test: test_nvmf_ctrlr_ns_attachment ...[2024-07-13 22:54:41.829119] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:52.483 [2024-07-13 22:54:41.829193] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:52.483 passed 00:08:52.483 Test: test_nvmf_check_qpair_active ...[2024-07-13 22:54:41.829304] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:08:52.483 [2024-07-13 22:54:41.829354] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4744:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:08:52.483 [2024-07-13 22:54:41.829395] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:08:52.483 [2024-07-13 22:54:41.829437] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:08:52.483 passed 00:08:52.483 00:08:52.483 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.483 suites 1 1 n/a 0 0 00:08:52.483 tests 32 32 32 0 0 00:08:52.483 asserts 977 977 977 0 n/a 00:08:52.483 00:08:52.483 Elapsed time = 0.007 seconds 00:08:52.483 [2024-07-13 22:54:41.829465] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:08:52.483 22:54:41 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:52.483 00:08:52.483 00:08:52.483 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.483 http://cunit.sourceforge.net/ 00:08:52.483 00:08:52.483 00:08:52.483 Suite: nvmf 00:08:52.483 Test: test_get_rw_params ...passed 00:08:52.483 Test: test_get_rw_ext_params ...passed 00:08:52.483 Test: test_lba_in_range ...passed 00:08:52.483 Test: test_get_dif_ctx ...passed 00:08:52.483 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:52.483 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-13 22:54:41.864229] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:52.483 [2024-07-13 22:54:41.864569] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:52.483 [2024-07-13 22:54:41.864678] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:52.483 passed 00:08:52.483 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-13 22:54:41.864741] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:52.483 [2024-07-13 22:54:41.864837] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:52.483 passed 00:08:52.483 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-13 22:54:41.865050] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:52.483 [2024-07-13 22:54:41.865097] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:52.483 [2024-07-13 22:54:41.865180] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:52.483 passed 00:08:52.483 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:52.484 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-07-13 22:54:41.865220] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:52.484 passed 00:08:52.484 00:08:52.484 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.484 suites 1 1 n/a 0 0 00:08:52.484 tests 10 10 10 0 0 00:08:52.484 asserts 159 159 159 0 n/a 00:08:52.484 00:08:52.484 Elapsed time = 0.001 seconds 00:08:52.484 22:54:41 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:52.743 00:08:52.743 00:08:52.743 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.743 http://cunit.sourceforge.net/ 00:08:52.743 00:08:52.743 00:08:52.743 Suite: nvmf 00:08:52.743 Test: test_discovery_log ...passed 00:08:52.743 Test: test_discovery_log_with_filters ...passed 00:08:52.743 00:08:52.743 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.743 suites 1 1 n/a 0 0 00:08:52.743 tests 2 2 2 0 0 00:08:52.743 asserts 238 238 238 0 n/a 00:08:52.743 00:08:52.743 Elapsed time = 0.003 seconds 00:08:52.743 22:54:41 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:52.743 00:08:52.743 00:08:52.743 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.743 http://cunit.sourceforge.net/ 00:08:52.743 00:08:52.743 00:08:52.743 Suite: nvmf 00:08:52.743 Test: nvmf_test_create_subsystem ...[2024-07-13 22:54:41.929780] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:52.743 [2024-07-13 22:54:41.930053] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:52.743 [2024-07-13 22:54:41.930226] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:52.743 [2024-07-13 22:54:41.930313] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:52.743 [2024-07-13 22:54:41.930353] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:08:52.743 [2024-07-13 22:54:41.930408] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:52.743 [2024-07-13 22:54:41.930491] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:52.743 [2024-07-13 22:54:41.930546] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:52.743 [2024-07-13 22:54:41.930583] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:52.743 [2024-07-13 22:54:41.930632] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:52.743 [2024-07-13 22:54:41.930668] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:52.744 [2024-07-13 22:54:41.930710] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:52.744 [2024-07-13 22:54:41.930831] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:52.744 [2024-07-13 22:54:41.930926] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:52.744 [2024-07-13 22:54:41.931036] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:52.744 [2024-07-13 22:54:41.931091] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:52.744 [2024-07-13 22:54:41.931196] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:52.744 [2024-07-13 22:54:41.931239] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:52.744 [2024-07-13 22:54:41.931280] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:52.744 [2024-07-13 22:54:41.931335] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:52.744 [2024-07-13 22:54:41.931376] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:52.744 [2024-07-13 22:54:41.931411] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:52.744 passed 00:08:52.744 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-13 22:54:41.931600] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:52.744 passed 00:08:52.744 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-13 22:54:41.931655] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:52.744 [2024-07-13 22:54:41.931896] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:08:52.744 passed 00:08:52.744 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:52.744 Test: test_spdk_nvmf_ns_visible ...[2024-07-13 22:54:41.932130] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:52.744 passed 00:08:52.744 Test: test_reservation_register ...[2024-07-13 22:54:41.932560] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 [2024-07-13 22:54:41.932690] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:52.744 passed 00:08:52.744 Test: test_reservation_register_with_ptpl ...passed 00:08:52.744 Test: test_reservation_acquire_preempt_1 ...[2024-07-13 22:54:41.933672] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:52.744 Test: test_reservation_release ...[2024-07-13 22:54:41.935380] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_reservation_unregister_notification ...[2024-07-13 22:54:41.935676] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_reservation_release_notification ...[2024-07-13 22:54:41.935907] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_reservation_release_notification_write_exclusive ...[2024-07-13 22:54:41.936146] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_reservation_clear_notification ...[2024-07-13 22:54:41.936371] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_reservation_preempt_notification ...[2024-07-13 22:54:41.936649] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:52.744 passed 00:08:52.744 Test: test_spdk_nvmf_ns_event ...passed 00:08:52.744 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:52.744 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:52.744 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-13 22:54:41.937479] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_ns_reservation_report ...[2024-07-13 22:54:41.937574] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_nqn_is_valid ...[2024-07-13 22:54:41.937689] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3465:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:52.744 [2024-07-13 
22:54:41.937773] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:52.744 [2024-07-13 22:54:41.937838] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:a4422dcc-a95e-43ed-ae85-85bab2751e8": uuid is not the correct length 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_ns_reservation_restore ...[2024-07-13 22:54:41.937883] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:52.744 [2024-07-13 22:54:41.937987] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_subsystem_state_change ...passed 00:08:52.744 Test: test_nvmf_reservation_custom_ops ...passed 00:08:52.744 00:08:52.744 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.744 suites 1 1 n/a 0 0 00:08:52.744 tests 24 24 24 0 0 00:08:52.744 asserts 499 499 499 0 n/a 00:08:52.744 00:08:52.744 Elapsed time = 0.009 seconds 00:08:52.744 22:54:41 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:52.744 00:08:52.744 00:08:52.744 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.744 http://cunit.sourceforge.net/ 00:08:52.744 00:08:52.744 00:08:52.744 Suite: nvmf 00:08:52.744 Test: test_nvmf_tcp_create ...[2024-07-13 22:54:42.000759] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_tcp_destroy ...passed 00:08:52.744 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:52.744 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:52.744 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:52.744 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:52.744 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:52.744 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-13 22:54:42.102575] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.744 [2024-07-13 22:54:42.102666] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 [2024-07-13 22:54:42.102761] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 [2024-07-13 22:54:42.102805] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.744 [2024-07-13 22:54:42.102843] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:52.744 Test: test_nvmf_tcp_icreq_handle ...[2024-07-13 22:54:42.102947] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:52.744 [2024-07-13 22:54:42.103044] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:08:52.744 [2024-07-13 22:54:42.103111] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_tcp_check_xfer_type ...[2024-07-13 22:54:42.103150] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:52.744 [2024-07-13 22:54:42.103194] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 [2024-07-13 22:54:42.103231] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.744 [2024-07-13 22:54:42.103272] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 [2024-07-13 22:54:42.103312] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:52.744 [2024-07-13 22:54:42.103360] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-13 22:54:42.103435] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2517:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:52.744 [2024-07-13 22:54:42.103486] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.744 [2024-07-13 22:54:42.103520] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fbfee0 is same with the state(5) to be set 00:08:52.744 passed 00:08:52.744 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-13 22:54:42.103590] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffd93fc0c40 00:08:52.744 [2024-07-13 22:54:42.103684] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.103751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.103801] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2306:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffd93fc03a0 00:08:52.745 [2024-07-13 22:54:42.103841] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.103883] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.103920] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:52.745 [2024-07-13 22:54:42.103962] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104015] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.104079] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:52.745 [2024-07-13 22:54:42.104125] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104172] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.104213] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104256] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.104321] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104354] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.104411] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104451] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.104496] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104532] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 [2024-07-13 22:54:42.104609] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104656] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 passed 00:08:52.745 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-13 22:54:42.104708] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:52.745 [2024-07-13 22:54:42.104744] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd93fc03a0 is same with the state(5) to be set 00:08:52.745 passed 00:08:52.745 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-13 22:54:42.127890] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:52.745 passed 00:08:52.745 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-13 22:54:42.127974] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:08:52.745 [2024-07-13 22:54:42.128373] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:52.745 [2024-07-13 22:54:42.128431] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:52.745 passed 00:08:52.745 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-13 22:54:42.128695] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:52.745 passed 00:08:52.745 00:08:52.745 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.745 suites 1 1 n/a 0 0 00:08:52.745 tests 17 17 17 0 0 00:08:52.745 asserts 222 222 222 0 n/a 00:08:52.745 00:08:52.745 Elapsed time = 0.152 seconds 00:08:52.745 [2024-07-13 22:54:42.128762] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:53.004 22:54:42 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:53.004 00:08:53.004 00:08:53.004 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.004 http://cunit.sourceforge.net/ 00:08:53.004 00:08:53.004 00:08:53.004 Suite: nvmf 00:08:53.004 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:53.004 00:08:53.004 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.004 suites 1 1 n/a 0 0 00:08:53.004 tests 1 1 1 0 0 00:08:53.004 asserts 17 17 17 0 n/a 00:08:53.004 00:08:53.004 Elapsed time = 0.024 seconds 00:08:53.004 00:08:53.004 real 0m0.492s 00:08:53.004 user 0m0.236s 00:08:53.004 sys 0m0.258s 00:08:53.004 22:54:42 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.004 22:54:42 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 ************************************ 00:08:53.004 END TEST unittest_nvmf 00:08:53.004 ************************************ 00:08:53.004 22:54:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:53.004 22:54:42 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:53.004 22:54:42 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:53.004 22:54:42 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:53.004 22:54:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.004 22:54:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.004 22:54:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 ************************************ 00:08:53.004 START TEST unittest_nvmf_rdma 00:08:53.004 ************************************ 00:08:53.004 22:54:42 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:53.004 00:08:53.004 00:08:53.004 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.004 http://cunit.sourceforge.net/ 00:08:53.004 00:08:53.004 00:08:53.004 Suite: nvmf 00:08:53.004 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-13 22:54:42.377381] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
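For context on the output format in these blocks: each "Suite" / "Test" / "Run Summary" section is printed by CUnit (the framework linked in every banner), which SPDK's unit-test binaries embed; the interleaved *ERROR* lines are SPDK's own logging, emitted while the tests deliberately drive error paths, so they appear even in passing runs. A minimal standalone sketch that produces the same kind of per-test lines and summary table (file, suite, and test names here are illustrative, not from the SPDK tree):

/* cunit_min.c - build with: gcc cunit_min.c -lcunit */
#include <CUnit/Basic.h>

static void test_addition(void)
{
        CU_ASSERT_EQUAL(2 + 2, 4);      /* each assertion is counted in the "asserts" column */
}

int main(void)
{
        if (CU_initialize_registry() != CUE_SUCCESS)
                return CU_get_error();

        CU_pSuite suite = CU_add_suite("example", NULL, NULL); /* no suite setup/teardown */
        if (suite == NULL || CU_add_test(suite, "test_addition", test_addition) == NULL) {
                CU_cleanup_registry();
                return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);      /* prints the per-test "...passed" lines */
        CU_basic_run_tests();                   /* prints the "Run Summary" table */
        CU_cleanup_registry();
        return CU_get_error();
}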
00:08:53.004 [2024-07-13 22:54:42.377715] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:53.004 [2024-07-13 22:54:42.377770] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:53.004 passed 00:08:53.004 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:53.004 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:53.004 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:53.004 Test: test_nvmf_rdma_opts_init ...passed 00:08:53.004 Test: test_nvmf_rdma_request_free_data ...passed 00:08:53.004 Test: test_nvmf_rdma_resources_create ...passed 00:08:53.004 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:53.004 Test: test_nvmf_rdma_resize_cq ...[2024-07-13 22:54:42.380244] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:53.004 Using CQ of insufficient size may lead to CQ overrun 00:08:53.004 [2024-07-13 22:54:42.380358] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:53.004 passed 00:08:53.004 00:08:53.004 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.004 suites 1 1 n/a 0 0 00:08:53.004 tests 9 9 9 0 0 00:08:53.004 asserts 579 579 579 0 n/a 00:08:53.004 00:08:53.004 Elapsed time = 0.003 seconds 00:08:53.004 [2024-07-13 22:54:42.380462] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:53.004 00:08:53.004 real 0m0.041s 00:08:53.004 user 0m0.009s 00:08:53.004 sys 0m0.032s 00:08:53.004 22:54:42 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.004 22:54:42 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 ************************************ 00:08:53.004 END TEST unittest_nvmf_rdma 00:08:53.004 ************************************ 00:08:53.263 22:54:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:53.263 22:54:42 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:53.263 22:54:42 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:08:53.263 22:54:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.263 22:54:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.263 22:54:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:53.263 ************************************ 00:08:53.263 START TEST unittest_scsi 00:08:53.263 ************************************ 00:08:53.263 22:54:42 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:08:53.263 22:54:42 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:53.263 00:08:53.263 00:08:53.263 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.263 http://cunit.sourceforge.net/ 00:08:53.263 00:08:53.263 00:08:53.263 Suite: dev_suite 00:08:53.263 Test: dev_destruct_null_dev ...passed 00:08:53.263 Test: dev_destruct_zero_luns ...passed 00:08:53.263 Test: dev_destruct_null_lun ...passed 00:08:53.263 Test: dev_destruct_success ...passed 00:08:53.263 Test: 
dev_construct_num_luns_zero ...[2024-07-13 22:54:42.469862] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:53.263 passed 00:08:53.263 Test: dev_construct_no_lun_zero ...passed 00:08:53.263 Test: dev_construct_null_lun ...[2024-07-13 22:54:42.470198] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:53.263 [2024-07-13 22:54:42.470292] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:53.263 passed 00:08:53.264 Test: dev_construct_name_too_long ...[2024-07-13 22:54:42.470366] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:53.264 passed 00:08:53.264 Test: dev_construct_success ...passed 00:08:53.264 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:53.264 Test: dev_queue_mgmt_task_success ...passed 00:08:53.264 Test: dev_queue_task_success ...passed 00:08:53.264 Test: dev_stop_success ...passed 00:08:53.264 Test: dev_add_port_max_ports ...passed 00:08:53.264 Test: dev_add_port_construct_failure1 ...[2024-07-13 22:54:42.470732] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:53.264 [2024-07-13 22:54:42.470871] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:53.264 passed 00:08:53.264 Test: dev_add_port_construct_failure2 ...passed 00:08:53.264 Test: dev_add_port_success1 ...passed 00:08:53.264 Test: dev_add_port_success2 ...passed 00:08:53.264 Test: dev_add_port_success3 ...passed 00:08:53.264 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:53.264 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:53.264 Test: dev_find_port_by_id_success ...[2024-07-13 22:54:42.470973] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:53.264 passed 00:08:53.264 Test: dev_add_lun_bdev_not_found ...passed 00:08:53.264 Test: dev_add_lun_no_free_lun_id ...[2024-07-13 22:54:42.471445] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:53.264 passed 00:08:53.264 Test: dev_add_lun_success1 ...passed 00:08:53.264 Test: dev_add_lun_success2 ...passed 00:08:53.264 Test: dev_check_pending_tasks ...passed 00:08:53.264 Test: dev_iterate_luns ...passed 00:08:53.264 Test: dev_find_free_lun ...passed 00:08:53.264 00:08:53.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.264 suites 1 1 n/a 0 0 00:08:53.264 tests 29 29 29 0 0 00:08:53.264 asserts 97 97 97 0 n/a 00:08:53.264 00:08:53.264 Elapsed time = 0.002 seconds 00:08:53.264 22:54:42 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:53.264 00:08:53.264 00:08:53.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.264 http://cunit.sourceforge.net/ 00:08:53.264 00:08:53.264 00:08:53.264 Suite: lun_suite 00:08:53.264 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-13 22:54:42.508223] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:53.264 passed 00:08:53.264 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-13 22:54:42.508585] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:53.264 passed 00:08:53.264 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:53.264 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:53.264 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-13 22:54:42.508781] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:53.264 passed 00:08:53.264 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:53.264 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:53.264 Test: lun_append_task_null_lun_not_supported ...passed 00:08:53.264 Test: lun_execute_scsi_task_pending ...passed 00:08:53.264 Test: lun_execute_scsi_task_complete ...passed 00:08:53.264 Test: lun_execute_scsi_task_resize ...passed 00:08:53.264 Test: lun_destruct_success ...passed 00:08:53.264 Test: lun_construct_null_ctx ...passed 00:08:53.264 Test: lun_construct_success ...passed 00:08:53.264 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-13 22:54:42.509072] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:53.264 passed 00:08:53.264 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:53.264 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:53.264 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:53.264 00:08:53.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.264 suites 1 1 n/a 0 0 00:08:53.264 tests 18 18 18 0 0 00:08:53.264 asserts 153 153 153 0 n/a 00:08:53.264 00:08:53.264 Elapsed time = 0.001 seconds 00:08:53.264 22:54:42 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:53.264 00:08:53.264 00:08:53.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.264 http://cunit.sourceforge.net/ 00:08:53.264 00:08:53.264 00:08:53.264 Suite: scsi_suite 00:08:53.264 Test: scsi_init ...passed 00:08:53.264 00:08:53.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.264 suites 1 1 n/a 0 0 00:08:53.264 tests 1 1 1 0 0 00:08:53.264 asserts 1 1 1 0 n/a 00:08:53.264 00:08:53.264 Elapsed time = 0.000 seconds 00:08:53.264 22:54:42 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:53.264 00:08:53.264 00:08:53.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.264 http://cunit.sourceforge.net/ 00:08:53.264 00:08:53.264 00:08:53.264 Suite: translation_suite 00:08:53.264 Test: mode_select_6_test ...passed 00:08:53.264 Test: mode_select_6_test2 ...passed 00:08:53.264 Test: mode_sense_6_test ...passed 00:08:53.264 Test: mode_sense_10_test ...passed 00:08:53.264 Test: inquiry_evpd_test ...passed 00:08:53.264 Test: inquiry_standard_test ...passed 00:08:53.264 Test: inquiry_overflow_test ...passed 00:08:53.264 Test: task_complete_test ...passed 00:08:53.264 Test: lba_range_test ...passed 00:08:53.264 Test: xfer_len_test ...[2024-07-13 22:54:42.576244] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:53.264 passed 00:08:53.264 Test: xfer_test 
...passed 00:08:53.264 Test: scsi_name_padding_test ...passed 00:08:53.264 Test: get_dif_ctx_test ...passed 00:08:53.264 Test: unmap_split_test ...passed 00:08:53.264 00:08:53.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.264 suites 1 1 n/a 0 0 00:08:53.264 tests 14 14 14 0 0 00:08:53.264 asserts 1205 1205 1205 0 n/a 00:08:53.264 00:08:53.264 Elapsed time = 0.003 seconds 00:08:53.264 22:54:42 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:53.264 00:08:53.264 00:08:53.264 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.264 http://cunit.sourceforge.net/ 00:08:53.264 00:08:53.264 00:08:53.264 Suite: reservation_suite 00:08:53.264 Test: test_reservation_register ...[2024-07-13 22:54:42.611240] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 passed 00:08:53.264 Test: test_reservation_reserve ...[2024-07-13 22:54:42.611718] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 [2024-07-13 22:54:42.611822] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:53.264 [2024-07-13 22:54:42.611970] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:53.264 passed 00:08:53.264 Test: test_all_registrant_reservation_reserve ...[2024-07-13 22:54:42.612089] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 passed 00:08:53.264 Test: test_all_registrant_reservation_access ...[2024-07-13 22:54:42.612261] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 [2024-07-13 22:54:42.612350] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:08:53.264 passed 00:08:53.264 Test: test_reservation_preempt_non_all_regs ...[2024-07-13 22:54:42.612428] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:08:53.264 [2024-07-13 22:54:42.612517] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 passed 00:08:53.264 Test: test_reservation_preempt_all_regs ...[2024-07-13 22:54:42.612618] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:53.264 passed 00:08:53.264 Test: test_reservation_cmds_conflict ...[2024-07-13 22:54:42.612805] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 [2024-07-13 22:54:42.613001] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 [2024-07-13 22:54:42.613096] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:53.264 [2024-07-13 22:54:42.613179] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access 
reservation type rejects command 0x28 00:08:53.264 [2024-07-13 22:54:42.613228] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:53.264 [2024-07-13 22:54:42.613292] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:53.264 [2024-07-13 22:54:42.613339] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:53.264 passed 00:08:53.264 Test: test_scsi2_reserve_release ...passed 00:08:53.264 Test: test_pr_with_scsi2_reserve_release ...[2024-07-13 22:54:42.613463] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:53.264 passed 00:08:53.264 00:08:53.264 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.264 suites 1 1 n/a 0 0 00:08:53.264 tests 9 9 9 0 0 00:08:53.264 asserts 344 344 344 0 n/a 00:08:53.264 00:08:53.264 Elapsed time = 0.002 seconds 00:08:53.264 00:08:53.264 real 0m0.173s 00:08:53.264 user 0m0.087s 00:08:53.264 sys 0m0.088s 00:08:53.264 22:54:42 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.264 22:54:42 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:08:53.264 ************************************ 00:08:53.264 END TEST unittest_scsi 00:08:53.264 ************************************ 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:53.524 22:54:42 unittest -- unit/unittest.sh@278 -- # uname -s 00:08:53.524 22:54:42 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:08:53.524 22:54:42 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:53.524 ************************************ 00:08:53.524 START TEST unittest_sock 00:08:53.524 ************************************ 00:08:53.524 22:54:42 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock 00:08:53.524 22:54:42 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:53.524 00:08:53.524 00:08:53.524 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.524 http://cunit.sourceforge.net/ 00:08:53.524 00:08:53.524 00:08:53.524 Suite: sock 00:08:53.524 Test: posix_sock ...passed 00:08:53.524 Test: ut_sock ...passed 00:08:53.524 Test: posix_sock_group ...passed 00:08:53.524 Test: ut_sock_group ...passed 00:08:53.524 Test: posix_sock_group_fairness ...passed 00:08:53.524 Test: _posix_sock_close ...passed 00:08:53.524 Test: sock_get_default_opts ...passed 00:08:53.524 Test: ut_sock_impl_get_set_opts ...passed 00:08:53.524 Test: posix_sock_impl_get_set_opts ...passed 00:08:53.524 Test: ut_sock_map ...passed 00:08:53.524 Test: override_impl_opts ...passed 00:08:53.524 Test: ut_sock_group_get_ctx ...passed 00:08:53.524 00:08:53.524 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.524 suites 1 1 n/a 0 0 00:08:53.524 tests 12 12 12 0 0 00:08:53.524 asserts 349 349 349 0 n/a 00:08:53.524 00:08:53.524 Elapsed time = 0.008 seconds 00:08:53.524 22:54:42 unittest.unittest_sock -- 
unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:53.524 00:08:53.524 00:08:53.524 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.524 http://cunit.sourceforge.net/ 00:08:53.524 00:08:53.524 00:08:53.524 Suite: posix 00:08:53.524 Test: flush ...passed 00:08:53.524 00:08:53.524 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.524 suites 1 1 n/a 0 0 00:08:53.524 tests 1 1 1 0 0 00:08:53.524 asserts 28 28 28 0 n/a 00:08:53.524 00:08:53.524 Elapsed time = 0.000 seconds 00:08:53.524 22:54:42 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:53.524 00:08:53.524 real 0m0.111s 00:08:53.524 user 0m0.034s 00:08:53.524 sys 0m0.051s 00:08:53.524 22:54:42 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.524 22:54:42 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:08:53.524 ************************************ 00:08:53.524 END TEST unittest_sock 00:08:53.524 ************************************ 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:53.524 22:54:42 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.524 22:54:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:53.524 ************************************ 00:08:53.524 START TEST unittest_thread 00:08:53.524 ************************************ 00:08:53.524 22:54:42 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:53.524 00:08:53.524 00:08:53.524 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.524 http://cunit.sourceforge.net/ 00:08:53.524 00:08:53.524 00:08:53.524 Suite: io_channel 00:08:53.524 Test: thread_alloc ...passed 00:08:53.524 Test: thread_send_msg ...passed 00:08:53.524 Test: thread_poller ...passed 00:08:53.524 Test: poller_pause ...passed 00:08:53.524 Test: thread_for_each ...passed 00:08:53.524 Test: for_each_channel_remove ...passed 00:08:53.524 Test: for_each_channel_unreg ...[2024-07-13 22:54:42.885151] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7ffd42ef51a0 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:53.524 passed 00:08:53.524 Test: thread_name ...passed 00:08:53.524 Test: channel ...[2024-07-13 22:54:42.889647] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x55db3ca59180 00:08:53.524 passed 00:08:53.524 Test: channel_destroy_races ...passed 00:08:53.524 Test: thread_exit_test ...[2024-07-13 22:54:42.894926] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:08:53.524 passed 00:08:53.525 Test: thread_update_stats_test ...passed 00:08:53.525 Test: nested_channel ...passed 00:08:53.525 Test: device_unregister_and_thread_exit_race ...passed 00:08:53.525 Test: cache_closest_timed_poller ...passed 00:08:53.525 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:53.525 Test: io_device_lookup ...passed 00:08:53.525 Test: spdk_spin ...[2024-07-13 
22:54:42.905794] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:53.525 [2024-07-13 22:54:42.905982] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd42ef5190 00:08:53.525 [2024-07-13 22:54:42.906210] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:53.525 [2024-07-13 22:54:42.907987] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:53.525 [2024-07-13 22:54:42.908202] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd42ef5190 00:08:53.525 [2024-07-13 22:54:42.908357] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:53.525 [2024-07-13 22:54:42.908516] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd42ef5190 00:08:53.525 [2024-07-13 22:54:42.908676] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:53.525 [2024-07-13 22:54:42.908850] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd42ef5190 00:08:53.525 [2024-07-13 22:54:42.909048] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:53.525 [2024-07-13 22:54:42.909216] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd42ef5190 00:08:53.525 passed 00:08:53.525 Test: for_each_channel_and_thread_exit_race ...passed 00:08:53.525 Test: for_each_thread_and_thread_exit_race ...passed 00:08:53.525 00:08:53.525 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.525 suites 1 1 n/a 0 0 00:08:53.525 tests 20 20 20 0 0 00:08:53.525 asserts 409 409 409 0 n/a 00:08:53.525 00:08:53.525 Elapsed time = 0.050 seconds 00:08:53.784 00:08:53.784 real 0m0.093s 00:08:53.784 user 0m0.072s 00:08:53.784 sys 0m0.020s 00:08:53.784 22:54:42 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.784 22:54:42 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.784 ************************************ 00:08:53.784 END TEST unittest_thread 00:08:53.784 ************************************ 00:08:53.784 22:54:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:53.784 22:54:42 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:53.784 22:54:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.784 22:54:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.784 22:54:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:53.784 ************************************ 00:08:53.784 START TEST unittest_iobuf 00:08:53.784 ************************************ 00:08:53.784 22:54:42 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:53.784 00:08:53.784 
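The spdk_spin test above walks a lock through each misuse the thread library flags as an unrecoverable spinlock error: locking from a non-SPDK thread, re-locking from the current owner (deadlock), unlocking from the wrong thread, and destroying a held lock. A rough pthread-based sketch of the owner-tracking idea behind those checks follows; it is simplified and is not the SPDK implementation (the real checks live in lib/thread/thread.c), and the unlocked read of "held" is acceptable only as an illustration:

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* A lock that remembers its owner so misuse can be detected. */
struct checked_lock {
        pthread_mutex_t mtx;
        pthread_t owner;
        bool held;
};

static void checked_lock_init(struct checked_lock *l)
{
        pthread_mutex_init(&l->mtx, NULL);
        l->held = false;
}

static void checked_lock_acquire(struct checked_lock *l)
{
        /* "Deadlock detected": the owning thread tried to lock again. */
        assert(!(l->held && pthread_equal(l->owner, pthread_self())));
        pthread_mutex_lock(&l->mtx);
        l->owner = pthread_self();
        l->held = true;
}

static void checked_lock_release(struct checked_lock *l)
{
        /* "Unlock on wrong SPDK thread": only the owner may release. */
        assert(l->held && pthread_equal(l->owner, pthread_self()));
        l->held = false;
        pthread_mutex_unlock(&l->mtx);
}

static void checked_lock_destroy(struct checked_lock *l)
{
        /* "Destroying a held spinlock" is likewise rejected. */
        assert(!l->held);
        pthread_mutex_destroy(&l->mtx);
}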
00:08:53.784 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.784 http://cunit.sourceforge.net/ 00:08:53.784 00:08:53.784 00:08:53.784 Suite: io_channel 00:08:53.784 Test: iobuf ...passed 00:08:53.784 Test: iobuf_cache ...[2024-07-13 22:54:43.015937] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:53.784 [2024-07-13 22:54:43.016244] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:53.784 [2024-07-13 22:54:43.016396] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:53.784 [2024-07-13 22:54:43.016450] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:53.784 [2024-07-13 22:54:43.016549] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:53.784 [2024-07-13 22:54:43.016601] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:53.784 passed 00:08:53.784 00:08:53.784 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.784 suites 1 1 n/a 0 0 00:08:53.784 tests 2 2 2 0 0 00:08:53.784 asserts 107 107 107 0 n/a 00:08:53.784 00:08:53.784 Elapsed time = 0.006 seconds 00:08:53.784 00:08:53.784 real 0m0.045s 00:08:53.784 user 0m0.017s 00:08:53.784 sys 0m0.028s 00:08:53.784 22:54:43 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.784 22:54:43 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:08:53.784 ************************************ 00:08:53.784 END TEST unittest_iobuf 00:08:53.784 ************************************ 00:08:53.784 22:54:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:53.784 22:54:43 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:08:53.784 22:54:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:53.784 22:54:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.784 22:54:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:53.784 ************************************ 00:08:53.784 START TEST unittest_util 00:08:53.784 ************************************ 00:08:53.784 22:54:43 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:08:53.784 22:54:43 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:53.784 00:08:53.784 00:08:53.784 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.784 http://cunit.sourceforge.net/ 00:08:53.784 00:08:53.784 00:08:53.784 Suite: base64 00:08:53.784 Test: test_base64_get_encoded_strlen ...passed 00:08:53.784 Test: test_base64_get_decoded_len ...passed 00:08:53.784 Test: test_base64_encode ...passed 00:08:53.784 Test: test_base64_decode ...passed 00:08:53.784 Test: test_base64_urlsafe_encode ...passed 
00:08:53.784 Test: test_base64_urlsafe_decode ...passed 00:08:53.784 00:08:53.784 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.784 suites 1 1 n/a 0 0 00:08:53.784 tests 6 6 6 0 0 00:08:53.784 asserts 112 112 112 0 n/a 00:08:53.784 00:08:53.784 Elapsed time = 0.000 seconds 00:08:53.784 22:54:43 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:53.784 00:08:53.784 00:08:53.784 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.784 http://cunit.sourceforge.net/ 00:08:53.784 00:08:53.784 00:08:53.784 Suite: bit_array 00:08:53.784 Test: test_1bit ...passed 00:08:53.784 Test: test_64bit ...passed 00:08:53.784 Test: test_find ...passed 00:08:53.784 Test: test_resize ...passed 00:08:53.784 Test: test_errors ...passed 00:08:53.784 Test: test_count ...passed 00:08:53.784 Test: test_mask_store_load ...passed 00:08:53.784 Test: test_mask_clear ...passed 00:08:53.784 00:08:53.784 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.784 suites 1 1 n/a 0 0 00:08:53.784 tests 8 8 8 0 0 00:08:53.784 asserts 5075 5075 5075 0 n/a 00:08:53.784 00:08:53.784 Elapsed time = 0.002 seconds 00:08:53.784 22:54:43 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:53.784 00:08:53.784 00:08:53.784 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.784 http://cunit.sourceforge.net/ 00:08:53.784 00:08:53.784 00:08:53.784 Suite: cpuset 00:08:53.784 Test: test_cpuset ...passed 00:08:53.784 Test: test_cpuset_parse ...[2024-07-13 22:54:43.163655] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:53.784 [2024-07-13 22:54:43.163968] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:08:53.784 [2024-07-13 22:54:43.164070] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:53.784 [2024-07-13 22:54:43.164169] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:53.784 [2024-07-13 22:54:43.164219] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:53.784 [2024-07-13 22:54:43.164267] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:53.784 [2024-07-13 22:54:43.164307] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:53.784 [2024-07-13 22:54:43.164366] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:53.784 passed 00:08:53.784 Test: test_cpuset_fmt ...passed 00:08:53.784 Test: test_cpuset_foreach ...passed 00:08:53.785 00:08:53.785 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.785 suites 1 1 n/a 0 0 00:08:53.785 tests 4 4 4 0 0 00:08:53.785 asserts 90 90 90 0 n/a 00:08:53.785 00:08:53.785 Elapsed time = 0.003 seconds 00:08:53.785 22:54:43 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:54.043 00:08:54.043 00:08:54.043 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.043 http://cunit.sourceforge.net/ 00:08:54.043 00:08:54.043 
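The crc16, crc32_ieee, crc32c, and crc64 suites that follow validate SPDK's checksum helpers against fixed vectors. For reference, CRC32C (the Castagnoli polynomial used by NVMe and iSCSI data protection, hence the test_crc32c_nvme case) reduces to a few lines in its plain bitwise form; the helpers under test compute the same function, only faster, via tables or, where available, the SSE4.2 crc32 instruction:

#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C, reflected Castagnoli polynomial 0x82F63B78.
 * Illustrative only: slow, but easy to check the fast code against. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;             /* initial seed */

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int bit = 0; bit < 8; bit++)
                        crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return ~crc;                            /* final inversion */
}

/* Sanity check: crc32c((const uint8_t *)"123456789", 9) == 0xE3069283. */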
00:08:54.043 Suite: crc16 00:08:54.043 Test: test_crc16_t10dif ...passed 00:08:54.043 Test: test_crc16_t10dif_seed ...passed 00:08:54.043 Test: test_crc16_t10dif_copy ...passed 00:08:54.043 00:08:54.043 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.043 suites 1 1 n/a 0 0 00:08:54.043 tests 3 3 3 0 0 00:08:54.043 asserts 5 5 5 0 n/a 00:08:54.043 00:08:54.043 Elapsed time = 0.000 seconds 00:08:54.043 22:54:43 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:54.043 00:08:54.043 00:08:54.043 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.043 http://cunit.sourceforge.net/ 00:08:54.043 00:08:54.043 00:08:54.043 Suite: crc32_ieee 00:08:54.043 Test: test_crc32_ieee ...passed 00:08:54.043 00:08:54.043 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.043 suites 1 1 n/a 0 0 00:08:54.043 tests 1 1 1 0 0 00:08:54.043 asserts 1 1 1 0 n/a 00:08:54.043 00:08:54.043 Elapsed time = 0.000 seconds 00:08:54.043 22:54:43 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:54.043 00:08:54.043 00:08:54.043 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.043 http://cunit.sourceforge.net/ 00:08:54.043 00:08:54.043 00:08:54.043 Suite: crc32c 00:08:54.043 Test: test_crc32c ...passed 00:08:54.043 Test: test_crc32c_nvme ...passed 00:08:54.043 00:08:54.043 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.043 suites 1 1 n/a 0 0 00:08:54.043 tests 2 2 2 0 0 00:08:54.043 asserts 16 16 16 0 n/a 00:08:54.043 00:08:54.043 Elapsed time = 0.000 seconds 00:08:54.043 22:54:43 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:54.043 00:08:54.043 00:08:54.043 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.043 http://cunit.sourceforge.net/ 00:08:54.043 00:08:54.043 00:08:54.043 Suite: crc64 00:08:54.043 Test: test_crc64_nvme ...passed 00:08:54.043 00:08:54.043 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.043 suites 1 1 n/a 0 0 00:08:54.043 tests 1 1 1 0 0 00:08:54.043 asserts 4 4 4 0 n/a 00:08:54.043 00:08:54.043 Elapsed time = 0.000 seconds 00:08:54.043 22:54:43 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:54.043 00:08:54.043 00:08:54.043 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.043 http://cunit.sourceforge.net/ 00:08:54.043 00:08:54.043 00:08:54.043 Suite: string 00:08:54.043 Test: test_parse_ip_addr ...passed 00:08:54.043 Test: test_str_chomp ...passed 00:08:54.043 Test: test_parse_capacity ...passed 00:08:54.043 Test: test_sprintf_append_realloc ...passed 00:08:54.043 Test: test_strtol ...passed 00:08:54.043 Test: test_strtoll ...passed 00:08:54.043 Test: test_strarray ...passed 00:08:54.043 Test: test_strcpy_replace ...passed 00:08:54.043 00:08:54.043 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.043 suites 1 1 n/a 0 0 00:08:54.043 tests 8 8 8 0 0 00:08:54.043 asserts 161 161 161 0 n/a 00:08:54.043 00:08:54.043 Elapsed time = 0.001 seconds 00:08:54.043 22:54:43 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:54.043 00:08:54.043 00:08:54.043 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.043 http://cunit.sourceforge.net/ 00:08:54.043 00:08:54.043 00:08:54.043 Suite: dif 00:08:54.043 Test: 
dif_generate_and_verify_test ...[2024-07-13 22:54:43.354395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:54.043 [2024-07-13 22:54:43.354906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:54.043 [2024-07-13 22:54:43.355207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:54.043 [2024-07-13 22:54:43.355515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:54.043 [2024-07-13 22:54:43.355855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:54.043 [2024-07-13 22:54:43.356167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:54.043 passed 00:08:54.043 Test: dif_disable_check_test ...[2024-07-13 22:54:43.357239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:54.043 [2024-07-13 22:54:43.357580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:54.043 [2024-07-13 22:54:43.357883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:54.043 passed 00:08:54.043 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-13 22:54:43.358966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:54.043 [2024-07-13 22:54:43.359294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:54.043 [2024-07-13 22:54:43.359628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:54.043 [2024-07-13 22:54:43.359990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:54.043 [2024-07-13 22:54:43.360332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:54.043 [2024-07-13 22:54:43.360651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:54.043 [2024-07-13 22:54:43.361008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:54.043 [2024-07-13 22:54:43.361330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:54.043 [2024-07-13 22:54:43.361646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:54.043 [2024-07-13 22:54:43.361987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:54.043 [2024-07-13 22:54:43.362326] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:54.043 passed 00:08:54.043 Test: dif_apptag_mask_test ...[2024-07-13 22:54:43.362664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:54.043 [2024-07-13 22:54:43.362991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:54.043 passed 00:08:54.043 Test: dif_sec_512_md_0_error_test ...[2024-07-13 22:54:43.363199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:54.043 passed 00:08:54.043 Test: dif_sec_4096_md_0_error_test ...passed 00:08:54.043 Test: dif_sec_4100_md_128_error_test ...[2024-07-13 22:54:43.363255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:54.043 [2024-07-13 22:54:43.363303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:54.043 [2024-07-13 22:54:43.363354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:54.043 passed 00:08:54.043 Test: dif_guard_seed_test ...passed 00:08:54.043 Test: dif_guard_value_test ...[2024-07-13 22:54:43.363396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:54.043 passed 00:08:54.043 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:54.043 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:54.043 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 22:54:43.408100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:08:54.043 [2024-07-13 22:54:43.410608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fa21, Actual=fe21 00:08:54.043 
[2024-07-13 22:54:43.413090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.043 [2024-07-13 22:54:43.415560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.043 [2024-07-13 22:54:43.418064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.043 [2024-07-13 22:54:43.420522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.043 [2024-07-13 22:54:43.423026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=dff 00:08:54.043 [2024-07-13 22:54:43.424454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=2767 00:08:54.043 [2024-07-13 22:54:43.425912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1eb753ed, Actual=1ab753ed 00:08:54.043 [2024-07-13 22:54:43.428390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=3c574660, Actual=38574660 00:08:54.043 [2024-07-13 22:54:43.430906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.043 [2024-07-13 22:54:43.433393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.043 [2024-07-13 22:54:43.435868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:54.043 [2024-07-13 22:54:43.438354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:54.043 [2024-07-13 22:54:43.440829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=14b402fe 00:08:54.043 [2024-07-13 22:54:43.442285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=fbfb2cd9 00:08:54.043 [2024-07-13 22:54:43.443732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.043 [2024-07-13 22:54:43.446218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:54.303 [2024-07-13 22:54:43.448690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.451178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.453693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.303 [2024-07-13 22:54:43.456166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: 
LBA=90, Expected=5a, Actual=400005a 00:08:54.303 [2024-07-13 22:54:43.458664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.303 [2024-07-13 22:54:43.460105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=4d3328ce7ccf90a3 00:08:54.303 passed 00:08:54.303 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-13 22:54:43.460636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.303 [2024-07-13 22:54:43.460970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:54.303 [2024-07-13 22:54:43.461277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.461597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.461920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.462236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.462543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dff 00:08:54.303 [2024-07-13 22:54:43.462756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2767 00:08:54.303 [2024-07-13 22:54:43.462974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.303 [2024-07-13 22:54:43.463280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:54.303 [2024-07-13 22:54:43.463609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.463921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.464246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.303 [2024-07-13 22:54:43.464543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.303 [2024-07-13 22:54:43.464847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.303 [2024-07-13 22:54:43.465074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fbfb2cd9 00:08:54.303 [2024-07-13 22:54:43.465309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, 
Actual=a576a7728ecc20d3 00:08:54.303 [2024-07-13 22:54:43.465617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:54.303 [2024-07-13 22:54:43.465929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.466233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.466545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.466869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.467212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.303 [2024-07-13 22:54:43.467432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4d3328ce7ccf90a3 00:08:54.303 passed 00:08:54.303 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-13 22:54:43.467695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.303 [2024-07-13 22:54:43.468007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:54.303 [2024-07-13 22:54:43.468316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.468628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.468977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.469299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.469613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dff 00:08:54.303 [2024-07-13 22:54:43.469821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2767 00:08:54.303 [2024-07-13 22:54:43.470038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.303 [2024-07-13 22:54:43.470348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:54.303 [2024-07-13 22:54:43.470657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.470967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 
22:54:43.471285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.303 [2024-07-13 22:54:43.471601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.303 [2024-07-13 22:54:43.471911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.303 [2024-07-13 22:54:43.472128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fbfb2cd9 00:08:54.303 [2024-07-13 22:54:43.472363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.303 [2024-07-13 22:54:43.472674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:54.303 [2024-07-13 22:54:43.473011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.473329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.473640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.473952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.474288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.303 [2024-07-13 22:54:43.474502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4d3328ce7ccf90a3 00:08:54.303 passed 00:08:54.303 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-13 22:54:43.474750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.303 [2024-07-13 22:54:43.475083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:54.303 [2024-07-13 22:54:43.475406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.475719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.476063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.476366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.476674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dff 00:08:54.303 
[2024-07-13 22:54:43.476886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2767 00:08:54.303 [2024-07-13 22:54:43.477120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.303 [2024-07-13 22:54:43.477424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:54.303 [2024-07-13 22:54:43.477766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.478081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.478414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.303 [2024-07-13 22:54:43.478733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.303 [2024-07-13 22:54:43.479045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.303 [2024-07-13 22:54:43.479267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fbfb2cd9 00:08:54.303 [2024-07-13 22:54:43.479489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.303 [2024-07-13 22:54:43.479805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:54.303 [2024-07-13 22:54:43.480106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.480423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.480731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.481062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.481395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.303 [2024-07-13 22:54:43.481608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4d3328ce7ccf90a3 00:08:54.303 passed 00:08:54.303 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-13 22:54:43.481869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.303 [2024-07-13 22:54:43.482181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 
00:08:54.303 [2024-07-13 22:54:43.482495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.482812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.303 [2024-07-13 22:54:43.483137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.483449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.303 [2024-07-13 22:54:43.483768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dff 00:08:54.303 [2024-07-13 22:54:43.483979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2767 00:08:54.303 passed 00:08:54.304 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-13 22:54:43.484242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.304 [2024-07-13 22:54:43.484555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:54.304 [2024-07-13 22:54:43.484897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.485234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.485551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.304 [2024-07-13 22:54:43.485863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.304 [2024-07-13 22:54:43.486177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.304 [2024-07-13 22:54:43.486388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fbfb2cd9 00:08:54.304 [2024-07-13 22:54:43.486647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.304 [2024-07-13 22:54:43.486965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:54.304 [2024-07-13 22:54:43.487272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.487586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.487898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 
22:54:43.488221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.488569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.304 [2024-07-13 22:54:43.488787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4d3328ce7ccf90a3 00:08:54.304 passed 00:08:54.304 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-13 22:54:43.489064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.304 [2024-07-13 22:54:43.489390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:54.304 [2024-07-13 22:54:43.489695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.490022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.490376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.490680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.490994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dff 00:08:54.304 [2024-07-13 22:54:43.491209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2767 00:08:54.304 passed 00:08:54.304 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-13 22:54:43.491471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.304 [2024-07-13 22:54:43.491779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:54.304 [2024-07-13 22:54:43.492129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.492447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.492764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.304 [2024-07-13 22:54:43.493105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.304 [2024-07-13 22:54:43.493423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.304 [2024-07-13 22:54:43.493642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=38574660, Actual=fbfb2cd9 00:08:54.304 [2024-07-13 22:54:43.493904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.304 [2024-07-13 22:54:43.494228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:54.304 [2024-07-13 22:54:43.494541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.494848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.495174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.495488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.495834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.304 [2024-07-13 22:54:43.496056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4d3328ce7ccf90a3 00:08:54.304 passed 00:08:54.304 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:54.304 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:54.304 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:54.304 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 22:54:43.540642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:08:54.304 [2024-07-13 22:54:43.541796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=391e, Actual=3d1e 00:08:54.304 [2024-07-13 22:54:43.542936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.544046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.545223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.546351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.547472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, 
Expected=fd4c, Actual=dff 00:08:54.304 [2024-07-13 22:54:43.548597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=4b 00:08:54.304 [2024-07-13 22:54:43.549727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1eb753ed, Actual=1ab753ed 00:08:54.304 [2024-07-13 22:54:43.551262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a7f411ee, Actual=a3f411ee 00:08:54.304 [2024-07-13 22:54:43.552713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.554447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.555586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:54.304 [2024-07-13 22:54:43.556705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:54.304 [2024-07-13 22:54:43.557840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=14b402fe 00:08:54.304 [2024-07-13 22:54:43.559278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=cb3c0b97 00:08:54.304 [2024-07-13 22:54:43.560724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.304 [2024-07-13 22:54:43.561917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c39fa16bb60648f1, Actual=c79fa16bb60648f1 00:08:54.304 [2024-07-13 22:54:43.563073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.564214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.565358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.566493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.567611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.304 passed 00:08:54.304 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 22:54:43.568764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=1dc99c185d1e09fd 00:08:54.304 [2024-07-13 22:54:43.569155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.304 [2024-07-13 22:54:43.569439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=cd05, Actual=c905 00:08:54.304 [2024-07-13 22:54:43.569712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.570002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.570301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.570626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.570908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=dff 00:08:54.304 [2024-07-13 22:54:43.571541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=f450 00:08:54.304 [2024-07-13 22:54:43.572124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.304 [2024-07-13 22:54:43.572426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=45c2306c, Actual=41c2306c 00:08:54.304 [2024-07-13 22:54:43.572730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.573037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.573324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.304 [2024-07-13 22:54:43.573623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.304 [2024-07-13 22:54:43.573890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.304 [2024-07-13 22:54:43.574166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=290a2a15 00:08:54.304 [2024-07-13 22:54:43.574460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.304 [2024-07-13 22:54:43.574719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2302358b893a436b, Actual=2702358b893a436b 00:08:54.304 [2024-07-13 22:54:43.574987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.575256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.575538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.575820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.576118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.304 [2024-07-13 22:54:43.576400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=fd5408f862220267 00:08:54.304 passed 00:08:54.304 Test: dix_sec_512_md_0_error ...[2024-07-13 22:54:43.576475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:54.304 passed 00:08:54.304 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:54.304 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:54.304 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:54.304 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:54.304 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:54.304 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:54.304 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:54.304 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:54.304 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:54.304 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 22:54:43.620402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:08:54.304 [2024-07-13 22:54:43.621540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=391e, Actual=3d1e 00:08:54.304 [2024-07-13 22:54:43.622670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.623786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.624947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.626088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.627210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=dff 00:08:54.304 [2024-07-13 22:54:43.628324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=4b 00:08:54.304 [2024-07-13 22:54:43.629455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1eb753ed, Actual=1ab753ed 00:08:54.304 [2024-07-13 22:54:43.630581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a7f411ee, Actual=a3f411ee 00:08:54.304 [2024-07-13 22:54:43.631718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.632856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, 
Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.633999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:54.304 [2024-07-13 22:54:43.635112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:54.304 [2024-07-13 22:54:43.636224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=14b402fe 00:08:54.304 [2024-07-13 22:54:43.637361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=cb3c0b97 00:08:54.304 [2024-07-13 22:54:43.638505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.304 [2024-07-13 22:54:43.639620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c39fa16bb60648f1, Actual=c79fa16bb60648f1 00:08:54.304 [2024-07-13 22:54:43.640742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.641882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.643002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.644118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:54.304 [2024-07-13 22:54:43.645277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.304 [2024-07-13 22:54:43.646414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=1dc99c185d1e09fd 00:08:54.304 passed 00:08:54.304 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 22:54:43.646790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:54.304 [2024-07-13 22:54:43.647072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:08:54.304 [2024-07-13 22:54:43.647356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.647641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.304 [2024-07-13 22:54:43.647940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.648216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.304 [2024-07-13 22:54:43.648493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=fd4c, Actual=dff 00:08:54.305 [2024-07-13 22:54:43.648767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=f450 00:08:54.305 [2024-07-13 22:54:43.649076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:54.305 [2024-07-13 22:54:43.649357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=45c2306c, Actual=41c2306c 00:08:54.305 [2024-07-13 22:54:43.649650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.305 [2024-07-13 22:54:43.649923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.305 [2024-07-13 22:54:43.650185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.305 [2024-07-13 22:54:43.650461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:54.305 [2024-07-13 22:54:43.650723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=14b402fe 00:08:54.305 [2024-07-13 22:54:43.650997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=290a2a15 00:08:54.305 [2024-07-13 22:54:43.651284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:54.305 [2024-07-13 22:54:43.651557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2302358b893a436b, Actual=2702358b893a436b 00:08:54.305 [2024-07-13 22:54:43.651825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.305 [2024-07-13 22:54:43.652091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:54.305 [2024-07-13 22:54:43.652351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.305 [2024-07-13 22:54:43.652622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:54.305 [2024-07-13 22:54:43.652935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1886cc99aea0bf3b 00:08:54.305 [2024-07-13 22:54:43.653225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=fd5408f862220267 00:08:54.305 passed 00:08:54.305 Test: set_md_interleave_iovs_test ...passed 00:08:54.305 Test: set_md_interleave_iovs_split_test ...passed 00:08:54.305 Test: dif_generate_stream_pi_16_test ...passed 00:08:54.305 Test: dif_generate_stream_test ...passed 00:08:54.305 Test: set_md_interleave_iovs_alignment_test ...[2024-07-13 22:54:43.660736] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:08:54.305 passed 00:08:54.305 Test: dif_generate_split_test ...passed 00:08:54.305 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:54.305 Test: dif_verify_split_test ...passed 00:08:54.305 Test: dif_verify_stream_multi_segments_test ...passed 00:08:54.305 Test: update_crc32c_pi_16_test ...passed 00:08:54.305 Test: update_crc32c_test ...passed 00:08:54.305 Test: dif_update_crc32c_split_test ...passed 00:08:54.305 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:54.305 Test: get_range_with_md_test ...passed 00:08:54.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:54.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:54.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:54.305 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:54.305 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:54.305 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:54.305 Test: dif_generate_and_verify_unmap_test ...passed 00:08:54.305 00:08:54.305 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.305 suites 1 1 n/a 0 0 00:08:54.305 tests 79 79 79 0 0 00:08:54.305 asserts 3584 3584 3584 0 n/a 00:08:54.305 00:08:54.305 Elapsed time = 0.351 seconds 00:08:54.563 22:54:43 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:54.563 00:08:54.563 00:08:54.563 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.563 http://cunit.sourceforge.net/ 00:08:54.563 00:08:54.563 00:08:54.563 Suite: iov 00:08:54.563 Test: test_single_iov ...passed 00:08:54.563 Test: test_simple_iov ...passed 00:08:54.563 Test: test_complex_iov ...passed 00:08:54.563 Test: test_iovs_to_buf ...passed 00:08:54.563 Test: test_buf_to_iovs ...passed 00:08:54.563 Test: test_memset ...passed 00:08:54.563 Test: test_iov_one ...passed 00:08:54.563 Test: test_iov_xfer ...passed 00:08:54.563 00:08:54.563 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.563 suites 1 1 n/a 0 0 00:08:54.563 tests 8 8 8 0 0 00:08:54.563 asserts 156 156 156 0 n/a 00:08:54.563 00:08:54.563 Elapsed time = 0.000 seconds 00:08:54.563 22:54:43 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:54.563 00:08:54.563 00:08:54.563 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.563 http://cunit.sourceforge.net/ 00:08:54.563 00:08:54.563 00:08:54.563 Suite: math 00:08:54.563 Test: test_serial_number_arithmetic ...passed 00:08:54.563 Suite: erase 00:08:54.563 Test: test_memset_s ...passed 00:08:54.563 00:08:54.563 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.563 suites 2 2 n/a 0 0 00:08:54.563 tests 2 2 2 0 0 00:08:54.563 asserts 18 18 18 0 n/a 00:08:54.563 00:08:54.563 Elapsed time = 0.000 seconds 00:08:54.563 22:54:43 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:54.563 00:08:54.563 00:08:54.563 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.563 http://cunit.sourceforge.net/ 00:08:54.563 00:08:54.563 00:08:54.563 Suite: pipe 00:08:54.563 Test: test_create_destroy ...passed 00:08:54.563 Test: test_write_get_buffer ...passed 00:08:54.563 Test: test_write_advance ...passed 
00:08:54.563 Test: test_read_get_buffer ...passed 00:08:54.563 Test: test_read_advance ...passed 00:08:54.563 Test: test_data ...passed 00:08:54.563 00:08:54.563 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.563 suites 1 1 n/a 0 0 00:08:54.563 tests 6 6 6 0 0 00:08:54.563 asserts 251 251 251 0 n/a 00:08:54.563 00:08:54.563 Elapsed time = 0.000 seconds 00:08:54.563 22:54:43 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:54.563 00:08:54.563 00:08:54.563 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.563 http://cunit.sourceforge.net/ 00:08:54.563 00:08:54.563 00:08:54.563 Suite: xor 00:08:54.563 Test: test_xor_gen ...passed 00:08:54.563 00:08:54.563 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.563 suites 1 1 n/a 0 0 00:08:54.563 tests 1 1 1 0 0 00:08:54.563 asserts 17 17 17 0 n/a 00:08:54.563 00:08:54.563 Elapsed time = 0.007 seconds 00:08:54.563 00:08:54.563 real 0m0.759s 00:08:54.563 user 0m0.580s 00:08:54.563 sys 0m0.181s 00:08:54.563 22:54:43 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.563 22:54:43 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:54.563 ************************************ 00:08:54.563 END TEST unittest_util 00:08:54.563 ************************************ 00:08:54.563 22:54:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:54.563 22:54:43 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:54.563 22:54:43 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:54.563 22:54:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.564 22:54:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.564 22:54:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:54.564 ************************************ 00:08:54.564 START TEST unittest_vhost 00:08:54.564 ************************************ 00:08:54.564 22:54:43 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:54.564 00:08:54.564 00:08:54.564 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.564 http://cunit.sourceforge.net/ 00:08:54.564 00:08:54.564 00:08:54.564 Suite: vhost_suite 00:08:54.564 Test: desc_to_iov_test ...[2024-07-13 22:54:43.925157] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:54.564 passed 00:08:54.564 Test: create_controller_test ...[2024-07-13 22:54:43.930261] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:54.564 [2024-07-13 22:54:43.930520] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:54.564 [2024-07-13 22:54:43.930828] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:54.564 [2024-07-13 22:54:43.931054] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:54.564 [2024-07-13 22:54:43.931259] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't 
register controller with no name 00:08:54.564 [2024-07-13 22:54:43.931774] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
00:08:54.564 [2024-07-13 22:54:43.933179] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:54.564 passed 00:08:54.564 Test: session_find_by_vid_test ...passed 00:08:54.564 Test: remove_controller_test ...[2024-07-13 22:54:43.935393] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:54.564 passed 00:08:54.564 Test: vq_avail_ring_get_test ...passed 00:08:54.564 Test: vq_packed_ring_test ...passed 00:08:54.564 Test: vhost_blk_construct_test ...passed 00:08:54.564 00:08:54.564 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.564 suites 1 1 n/a 0 0 00:08:54.564 tests 7 7 7 0 0 00:08:54.564 asserts 147 147 147 0 n/a 00:08:54.564 00:08:54.564 Elapsed time = 0.013 seconds 00:08:54.564 00:08:54.564 real 0m0.056s 00:08:54.564 user 0m0.036s 00:08:54.564 sys 0m0.018s 00:08:54.564 22:54:43 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.564 22:54:43 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:54.564 ************************************ 00:08:54.564 END TEST unittest_vhost 00:08:54.564 ************************************ 00:08:54.822 22:54:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:54.822 22:54:43 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:54.822 22:54:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.822 22:54:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.822 22:54:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 ************************************ 00:08:54.822 START TEST unittest_dma 00:08:54.822 ************************************ 00:08:54.822 22:54:44 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:54.822 00:08:54.822 00:08:54.822 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.822 http://cunit.sourceforge.net/ 00:08:54.822 00:08:54.822 00:08:54.822 Suite: dma_suite 00:08:54.822 Test: test_dma ...[2024-07-13 22:54:44.028487] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:54.822 passed 00:08:54.822 00:08:54.822 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.822 suites 1 1 n/a 0 0 00:08:54.822 tests 1 1 1 0 0 00:08:54.822 asserts 54 54 54 0 n/a 00:08:54.822 00:08:54.822 Elapsed time = 0.000 seconds 00:08:54.822 00:08:54.822 real 0m0.033s 00:08:54.822 user 0m0.023s 00:08:54.822 sys 0m0.010s 00:08:54.822 22:54:44 unittest.unittest_dma --
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.822 22:54:44 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 ************************************ 00:08:54.822 END TEST unittest_dma 00:08:54.822 ************************************ 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:54.822 22:54:44 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 ************************************ 00:08:54.822 START TEST unittest_init 00:08:54.822 ************************************ 00:08:54.822 22:54:44 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:08:54.822 22:54:44 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:54.822 00:08:54.822 00:08:54.822 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.822 http://cunit.sourceforge.net/ 00:08:54.822 00:08:54.822 00:08:54.822 Suite: subsystem_suite 00:08:54.822 Test: subsystem_sort_test_depends_on_single ...passed 00:08:54.822 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:54.822 Test: subsystem_sort_test_missing_dependency ...[2024-07-13 22:54:44.115855] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:54.822 [2024-07-13 22:54:44.116189] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:54.822 passed 00:08:54.822 00:08:54.822 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.822 suites 1 1 n/a 0 0 00:08:54.822 tests 3 3 3 0 0 00:08:54.822 asserts 20 20 20 0 n/a 00:08:54.822 00:08:54.822 Elapsed time = 0.001 seconds 00:08:54.822 00:08:54.822 real 0m0.038s 00:08:54.822 user 0m0.019s 00:08:54.822 sys 0m0.019s 00:08:54.822 22:54:44 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.822 22:54:44 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 ************************************ 00:08:54.822 END TEST unittest_init 00:08:54.822 ************************************ 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:54.822 22:54:44 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.822 22:54:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 ************************************ 00:08:54.822 START TEST unittest_keyring 00:08:54.822 ************************************ 00:08:54.822 22:54:44 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:54.822 00:08:54.822 00:08:54.822 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.822 http://cunit.sourceforge.net/ 00:08:54.822 00:08:54.822 00:08:54.822 Suite: keyring 00:08:54.822 Test: test_keyring_add_remove ...[2024-07-13 22:54:44.201715] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 
107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:54.822 [2024-07-13 22:54:44.202050] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:54.822 passed 00:08:54.822 Test: test_keyring_get_put ...passed 00:08:54.822 00:08:54.822 [2024-07-13 22:54:44.202139] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:54.822 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.822 suites 1 1 n/a 0 0 00:08:54.822 tests 2 2 2 0 0 00:08:54.822 asserts 44 44 44 0 n/a 00:08:54.822 00:08:54.822 Elapsed time = 0.001 seconds 00:08:54.822 00:08:54.822 real 0m0.032s 00:08:54.822 user 0m0.020s 00:08:54.822 sys 0m0.012s 00:08:54.822 22:54:44 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.822 22:54:44 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:54.822 ************************************ 00:08:54.822 END TEST unittest_keyring 00:08:54.822 ************************************ 00:08:55.080 22:54:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:55.080 22:54:44 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:55.080 22:54:44 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:55.080 22:54:44 unittest -- unit/unittest.sh@293 -- # hostname 00:08:55.080 22:54:44 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:55.080 geninfo: WARNING: invalid characters removed from testname! 
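A note on the *ERROR* lines that dominate this section: the dif_sec_*_inject_* cases deliberately corrupt one protection-information field at a time and then assert that the verify path reports the mismatch, which is why every "Failed to compare Guard/App Tag/Ref Tag" message above belongs to a test that ultimately reads "passed" (the unittest_util run summary counts 79/79 tests passed, 0 failed). The sketch below illustrates the shape of the check being exercised. It is a minimal stand-alone example, not SPDK's lib/util/dif.c: the struct layout and the names dif_tuple, crc16_t10dif and dif_verify_block are invented for this illustration; only the T10-DIF CRC-16 guard polynomial (0x8BB7) and the guard/app-tag/ref-tag fields correspond to the behavior logged above.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One 8-byte protection-information tuple per data block (layout assumed
 * for illustration; real DIF stores these fields big-endian on the wire). */
struct dif_tuple {
	uint16_t guard;   /* CRC-16 over the block's data */
	uint16_t app_tag; /* application-defined tag */
	uint32_t ref_tag; /* typically seeded from the block's LBA */
};

/* Bitwise CRC-16 using the T10-DIF polynomial 0x8BB7, initial value 0. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)((uint16_t)buf[i] << 8);
		for (int bit = 0; bit < 8; bit++)
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
	}
	return crc;
}

/* Compare all three tags, logging mismatches in the same style as the
 * messages above; returns 0 only when the whole tuple checks out. */
static int dif_verify_block(const uint8_t *data, size_t len,
			    const struct dif_tuple *dif,
			    uint16_t exp_app_tag, uint32_t exp_ref_tag,
			    uint64_t lba)
{
	uint16_t guard = crc16_t10dif(data, len);
	int rc = 0;

	if (dif->guard != guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%llx, Expected=%x, Actual=%x\n",
			(unsigned long long)lba, guard, dif->guard);
		rc = -1;
	}
	if (dif->app_tag != exp_app_tag) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%llx, Expected=%x, Actual=%x\n",
			(unsigned long long)lba, exp_app_tag, dif->app_tag);
		rc = -1;
	}
	if (dif->ref_tag != exp_ref_tag) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%llx, Expected=%x, Actual=%x\n",
			(unsigned long long)lba, (unsigned)exp_ref_tag, (unsigned)dif->ref_tag);
		rc = -1;
	}
	return rc;
}

int main(void)
{
	uint8_t block[512];
	struct dif_tuple dif;

	memset(block, 0xab, sizeof(block));
	dif.guard = crc16_t10dif(block, sizeof(block));
	dif.app_tag = 0x88;
	dif.ref_tag = 0x5a;

	/* A clean tuple verifies silently. */
	if (dif_verify_block(block, sizeof(block), &dif, 0x88, 0x5a, 0x5a) != 0)
		return 1;

	/* Flip one bit in the stored app tag, in the spirit of the inject
	 * tests above (0x88 -> 0x488), and confirm the mismatch is caught. */
	dif.app_tag ^= 0x400;
	return dif_verify_block(block, sizeof(block), &dif, 0x88, 0x5a, 0x5a) == 0;
}

Compiled on its own (e.g. cc -std=c99 -o dif_sketch dif_sketch.c), the second call prints a "Failed to compare App Tag: ... Expected=88, Actual=488" line matching the single-bit injection pattern seen throughout this log while the program still exits 0 -- the same convention that lets these suites emit *ERROR* output yet report every test as passed.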
00:09:27.147 22:55:12 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:09:28.082 22:55:17 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:31.361 22:55:20 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:34.728 22:55:23 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:37.255 22:55:26 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:40.537 22:55:29 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:43.067 22:55:32 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:45.602 22:55:34 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:45.602 22:55:34 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:45.860 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:09:45.860 Found 322 entries. 00:09:45.860 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:45.860 Writing .css and .png files. 00:09:45.860 Generating output. 00:09:46.118 Processing file include/linux/virtio_ring.h 00:09:46.377 Processing file include/spdk/base64.h 00:09:46.377 Processing file include/spdk/util.h 00:09:46.377 Processing file include/spdk/trace.h 00:09:46.377 Processing file include/spdk/endian.h 00:09:46.377 Processing file include/spdk/thread.h 00:09:46.377 Processing file include/spdk/nvme_spec.h 00:09:46.377 Processing file include/spdk/mmio.h 00:09:46.377 Processing file include/spdk/nvme.h 00:09:46.377 Processing file include/spdk/bdev_module.h 00:09:46.377 Processing file include/spdk/histogram_data.h 00:09:46.377 Processing file include/spdk/nvmf_transport.h 00:09:46.377 Processing file include/spdk_internal/sock.h 00:09:46.377 Processing file include/spdk_internal/rdma_utils.h 00:09:46.377 Processing file include/spdk_internal/virtio.h 00:09:46.377 Processing file include/spdk_internal/sgl.h 00:09:46.377 Processing file include/spdk_internal/nvme_tcp.h 00:09:46.377 Processing file include/spdk_internal/utf.h 00:09:46.636 Processing file lib/accel/accel_sw.c 00:09:46.636 Processing file lib/accel/accel_rpc.c 00:09:46.636 Processing file lib/accel/accel.c 00:09:46.894 Processing file lib/bdev/bdev_rpc.c 00:09:46.894 Processing file lib/bdev/bdev.c 00:09:46.894 Processing file lib/bdev/part.c 00:09:46.894 Processing file lib/bdev/bdev_zone.c 00:09:46.894 Processing file lib/bdev/scsi_nvme.c 00:09:47.153 Processing file lib/blob/request.c 00:09:47.153 Processing file lib/blob/blob_bs_dev.c 00:09:47.153 Processing file lib/blob/blobstore.c 00:09:47.153 Processing file lib/blob/zeroes.c 00:09:47.153 Processing file lib/blob/blobstore.h 00:09:47.484 Processing file lib/blobfs/blobfs.c 00:09:47.484 Processing file lib/blobfs/tree.c 00:09:47.484 Processing file lib/conf/conf.c 00:09:47.484 Processing file lib/dma/dma.c 00:09:47.774 Processing file lib/env_dpdk/sigbus_handler.c 00:09:47.774 Processing file lib/env_dpdk/pci_event.c 00:09:47.774 Processing file lib/env_dpdk/memory.c 00:09:47.774 Processing file lib/env_dpdk/pci_virtio.c 00:09:47.774 Processing file lib/env_dpdk/pci_vmd.c 00:09:47.774 Processing file lib/env_dpdk/init.c 00:09:47.774 Processing file lib/env_dpdk/env.c 00:09:47.774 Processing file lib/env_dpdk/threads.c 00:09:47.774 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:47.774 Processing file lib/env_dpdk/pci.c 00:09:47.774 Processing file lib/env_dpdk/pci_idxd.c 00:09:47.774 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:47.774 Processing file lib/env_dpdk/pci_ioat.c 00:09:47.774 Processing file lib/env_dpdk/pci_dpdk.c 00:09:48.033 Processing file lib/event/scheduler_static.c 00:09:48.033 Processing file lib/event/app.c 00:09:48.033 Processing file lib/event/log_rpc.c 00:09:48.033 Processing file lib/event/reactor.c 00:09:48.033 Processing file lib/event/app_rpc.c 00:09:48.292 Processing file lib/ftl/ftl_nv_cache.c 00:09:48.292 Processing file lib/ftl/ftl_p2l.c 00:09:48.292 Processing file lib/ftl/ftl_core.h 00:09:48.292 Processing file lib/ftl/ftl_sb.c 00:09:48.292 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:48.292 Processing file lib/ftl/ftl_init.c 00:09:48.292 Processing file lib/ftl/ftl_writer.c 00:09:48.292 Processing file lib/ftl/ftl_band.c 00:09:48.292 Processing file lib/ftl/ftl_rq.c 00:09:48.292 Processing file lib/ftl/ftl_band_ops.c 00:09:48.292 Processing file lib/ftl/ftl_reloc.c 00:09:48.292 Processing file 
lib/ftl/ftl_l2p_flat.c 00:09:48.292 Processing file lib/ftl/ftl_io.h 00:09:48.292 Processing file lib/ftl/ftl_debug.h 00:09:48.292 Processing file lib/ftl/ftl_core.c 00:09:48.292 Processing file lib/ftl/ftl_writer.h 00:09:48.292 Processing file lib/ftl/ftl_debug.c 00:09:48.292 Processing file lib/ftl/ftl_io.c 00:09:48.292 Processing file lib/ftl/ftl_l2p_cache.c 00:09:48.292 Processing file lib/ftl/ftl_trace.c 00:09:48.292 Processing file lib/ftl/ftl_band.h 00:09:48.292 Processing file lib/ftl/ftl_nv_cache.h 00:09:48.292 Processing file lib/ftl/ftl_l2p.c 00:09:48.292 Processing file lib/ftl/ftl_layout.c 00:09:48.551 Processing file lib/ftl/base/ftl_base_dev.c 00:09:48.551 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:48.809 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:48.809 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:48.809 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:09:49.068 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:49.068 Processing file lib/ftl/utils/ftl_mempool.c 00:09:49.068 Processing file lib/ftl/utils/ftl_df.h 00:09:49.068 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:49.068 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:49.068 Processing file lib/ftl/utils/ftl_md.c 00:09:49.068 Processing file lib/ftl/utils/ftl_conf.c 00:09:49.068 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:49.068 Processing file lib/ftl/utils/ftl_property.c 00:09:49.068 Processing file lib/ftl/utils/ftl_property.h 00:09:49.327 Processing file lib/idxd/idxd.c 00:09:49.327 Processing file lib/idxd/idxd_user.c 00:09:49.327 Processing file lib/idxd/idxd_internal.h 00:09:49.327 Processing file lib/init/subsystem_rpc.c 00:09:49.327 Processing file lib/init/subsystem.c 00:09:49.327 Processing file lib/init/json_config.c 00:09:49.327 Processing file lib/init/rpc.c 00:09:49.585 Processing file lib/ioat/ioat_internal.h 00:09:49.586 Processing file lib/ioat/ioat.c 00:09:49.844 Processing file lib/iscsi/task.c 00:09:49.844 Processing file lib/iscsi/conn.c 00:09:49.844 Processing file lib/iscsi/tgt_node.c 00:09:49.844 Processing file lib/iscsi/task.h 00:09:49.844 Processing file lib/iscsi/init_grp.c 00:09:49.844 Processing file lib/iscsi/portal_grp.c 00:09:49.844 Processing file lib/iscsi/iscsi.c 00:09:49.844 Processing file lib/iscsi/iscsi.h 00:09:49.844 Processing file lib/iscsi/param.c 00:09:49.844 Processing file lib/iscsi/md5.c 00:09:49.844 
Processing file lib/iscsi/iscsi_rpc.c 00:09:49.844 Processing file lib/iscsi/iscsi_subsystem.c 00:09:50.103 Processing file lib/json/json_write.c 00:09:50.103 Processing file lib/json/json_parse.c 00:09:50.103 Processing file lib/json/json_util.c 00:09:50.103 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:50.103 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:50.103 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:50.103 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:50.103 Processing file lib/keyring/keyring_rpc.c 00:09:50.103 Processing file lib/keyring/keyring.c 00:09:50.362 Processing file lib/log/log.c 00:09:50.362 Processing file lib/log/log_deprecated.c 00:09:50.362 Processing file lib/log/log_flags.c 00:09:50.362 Processing file lib/lvol/lvol.c 00:09:50.362 Processing file lib/nbd/nbd.c 00:09:50.362 Processing file lib/nbd/nbd_rpc.c 00:09:50.362 Processing file lib/notify/notify_rpc.c 00:09:50.362 Processing file lib/notify/notify.c 00:09:51.298 Processing file lib/nvme/nvme_ctrlr.c 00:09:51.298 Processing file lib/nvme/nvme_rdma.c 00:09:51.298 Processing file lib/nvme/nvme_transport.c 00:09:51.298 Processing file lib/nvme/nvme_ns_cmd.c 00:09:51.298 Processing file lib/nvme/nvme_pcie.c 00:09:51.298 Processing file lib/nvme/nvme_quirks.c 00:09:51.298 Processing file lib/nvme/nvme_poll_group.c 00:09:51.298 Processing file lib/nvme/nvme_zns.c 00:09:51.298 Processing file lib/nvme/nvme_opal.c 00:09:51.298 Processing file lib/nvme/nvme_auth.c 00:09:51.298 Processing file lib/nvme/nvme_tcp.c 00:09:51.298 Processing file lib/nvme/nvme.c 00:09:51.298 Processing file lib/nvme/nvme_fabric.c 00:09:51.298 Processing file lib/nvme/nvme_ns.c 00:09:51.298 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:51.298 Processing file lib/nvme/nvme_pcie_common.c 00:09:51.298 Processing file lib/nvme/nvme_internal.h 00:09:51.298 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:51.298 Processing file lib/nvme/nvme_qpair.c 00:09:51.298 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:51.298 Processing file lib/nvme/nvme_io_msg.c 00:09:51.298 Processing file lib/nvme/nvme_discovery.c 00:09:51.298 Processing file lib/nvme/nvme_pcie_internal.h 00:09:51.298 Processing file lib/nvme/nvme_cuse.c 00:09:51.865 Processing file lib/nvmf/subsystem.c 00:09:51.865 Processing file lib/nvmf/nvmf_internal.h 00:09:51.865 Processing file lib/nvmf/auth.c 00:09:51.865 Processing file lib/nvmf/transport.c 00:09:51.865 Processing file lib/nvmf/nvmf.c 00:09:51.865 Processing file lib/nvmf/rdma.c 00:09:51.865 Processing file lib/nvmf/ctrlr.c 00:09:51.865 Processing file lib/nvmf/tcp.c 00:09:51.865 Processing file lib/nvmf/ctrlr_bdev.c 00:09:51.865 Processing file lib/nvmf/nvmf_rpc.c 00:09:51.865 Processing file lib/nvmf/ctrlr_discovery.c 00:09:51.865 Processing file lib/rdma_provider/common.c 00:09:51.865 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:09:51.865 Processing file lib/rdma_utils/rdma_utils.c 00:09:51.865 Processing file lib/rpc/rpc.c 00:09:52.123 Processing file lib/scsi/scsi_bdev.c 00:09:52.123 Processing file lib/scsi/lun.c 00:09:52.123 Processing file lib/scsi/scsi_rpc.c 00:09:52.123 Processing file lib/scsi/scsi.c 00:09:52.123 Processing file lib/scsi/port.c 00:09:52.123 Processing file lib/scsi/task.c 00:09:52.123 Processing file lib/scsi/dev.c 00:09:52.123 Processing file lib/scsi/scsi_pr.c 00:09:52.123 Processing file lib/sock/sock_rpc.c 00:09:52.123 Processing file lib/sock/sock.c 00:09:52.382 Processing file lib/thread/thread.c 00:09:52.382 Processing file 
lib/thread/iobuf.c 00:09:52.382 Processing file lib/trace/trace_rpc.c 00:09:52.382 Processing file lib/trace/trace_flags.c 00:09:52.382 Processing file lib/trace/trace.c 00:09:52.382 Processing file lib/trace_parser/trace.cpp 00:09:52.641 Processing file lib/ut/ut.c 00:09:52.641 Processing file lib/ut_mock/mock.c 00:09:52.900 Processing file lib/util/strerror_tls.c 00:09:52.900 Processing file lib/util/bit_array.c 00:09:52.900 Processing file lib/util/crc16.c 00:09:52.900 Processing file lib/util/fd_group.c 00:09:52.900 Processing file lib/util/crc32.c 00:09:52.900 Processing file lib/util/base64.c 00:09:52.900 Processing file lib/util/fd.c 00:09:52.900 Processing file lib/util/string.c 00:09:52.900 Processing file lib/util/pipe.c 00:09:52.900 Processing file lib/util/zipf.c 00:09:52.900 Processing file lib/util/uuid.c 00:09:52.900 Processing file lib/util/hexlify.c 00:09:52.900 Processing file lib/util/crc64.c 00:09:52.900 Processing file lib/util/math.c 00:09:52.900 Processing file lib/util/cpuset.c 00:09:52.900 Processing file lib/util/crc32c.c 00:09:52.900 Processing file lib/util/dif.c 00:09:52.900 Processing file lib/util/xor.c 00:09:52.900 Processing file lib/util/iov.c 00:09:52.900 Processing file lib/util/file.c 00:09:52.900 Processing file lib/util/crc32_ieee.c 00:09:53.159 Processing file lib/vfio_user/host/vfio_user.c 00:09:53.159 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:53.159 Processing file lib/vhost/vhost_internal.h 00:09:53.159 Processing file lib/vhost/vhost.c 00:09:53.159 Processing file lib/vhost/vhost_blk.c 00:09:53.159 Processing file lib/vhost/rte_vhost_user.c 00:09:53.159 Processing file lib/vhost/vhost_rpc.c 00:09:53.159 Processing file lib/vhost/vhost_scsi.c 00:09:53.418 Processing file lib/virtio/virtio.c 00:09:53.418 Processing file lib/virtio/virtio_pci.c 00:09:53.418 Processing file lib/virtio/virtio_vhost_user.c 00:09:53.418 Processing file lib/virtio/virtio_vfio_user.c 00:09:53.418 Processing file lib/vmd/vmd.c 00:09:53.418 Processing file lib/vmd/led.c 00:09:53.677 Processing file module/accel/dsa/accel_dsa.c 00:09:53.677 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:53.677 Processing file module/accel/error/accel_error.c 00:09:53.677 Processing file module/accel/error/accel_error_rpc.c 00:09:53.677 Processing file module/accel/iaa/accel_iaa.c 00:09:53.677 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:53.937 Processing file module/accel/ioat/accel_ioat.c 00:09:53.937 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:53.937 Processing file module/bdev/aio/bdev_aio.c 00:09:53.937 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:53.937 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:53.937 Processing file module/bdev/delay/vbdev_delay.c 00:09:54.205 Processing file module/bdev/error/vbdev_error.c 00:09:54.205 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:54.205 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:54.205 Processing file module/bdev/ftl/bdev_ftl.c 00:09:54.205 Processing file module/bdev/gpt/gpt.c 00:09:54.205 Processing file module/bdev/gpt/gpt.h 00:09:54.205 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:54.463 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:54.463 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:54.463 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:54.463 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:54.463 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:54.463 Processing file module/bdev/malloc/bdev_malloc.c 
00:09:54.721 Processing file module/bdev/null/bdev_null_rpc.c 00:09:54.721 Processing file module/bdev/null/bdev_null.c 00:09:54.979 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:54.979 Processing file module/bdev/nvme/nvme_rpc.c 00:09:54.979 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:54.979 Processing file module/bdev/nvme/bdev_nvme.c 00:09:54.979 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:54.979 Processing file module/bdev/nvme/vbdev_opal.c 00:09:54.979 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:54.979 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:54.979 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:55.237 Processing file module/bdev/raid/raid0.c 00:09:55.237 Processing file module/bdev/raid/bdev_raid.h 00:09:55.237 Processing file module/bdev/raid/bdev_raid.c 00:09:55.237 Processing file module/bdev/raid/raid5f.c 00:09:55.237 Processing file module/bdev/raid/raid1.c 00:09:55.237 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:55.237 Processing file module/bdev/raid/concat.c 00:09:55.237 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:55.237 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:55.237 Processing file module/bdev/split/vbdev_split.c 00:09:55.496 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:55.496 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:55.496 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:55.496 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:55.496 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:55.496 Processing file module/blob/bdev/blob_bdev.c 00:09:55.753 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:55.753 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:55.754 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:55.754 Processing file module/event/subsystems/accel/accel.c 00:09:56.011 Processing file module/event/subsystems/bdev/bdev.c 00:09:56.011 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:56.011 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:56.011 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:56.011 Processing file module/event/subsystems/keyring/keyring.c 00:09:56.269 Processing file module/event/subsystems/nbd/nbd.c 00:09:56.269 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:56.269 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:56.269 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:56.269 Processing file module/event/subsystems/scsi/scsi.c 00:09:56.527 Processing file module/event/subsystems/sock/sock.c 00:09:56.527 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:56.527 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:56.527 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:56.527 Processing file module/event/subsystems/vmd/vmd.c 00:09:56.786 Processing file module/keyring/file/keyring.c 00:09:56.786 Processing file module/keyring/file/keyring_rpc.c 00:09:56.786 Processing file module/keyring/linux/keyring.c 00:09:56.786 Processing file module/keyring/linux/keyring_rpc.c 00:09:56.786 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:56.786 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:57.044 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:57.044 Processing file module/sock/sock_kernel.h 00:09:57.044 Processing file module/sock/posix/posix.c 00:09:57.044 
Writing directory view page. 00:09:57.044 Overall coverage rate: 00:09:57.044 lines......: 38.7% (40911 of 105814 lines) 00:09:57.044 functions..: 42.3% (3727 of 8806 functions) 00:09:57.044 00:09:57.044 00:09:57.044 ===================== 00:09:57.044 All unit tests passed 00:09:57.044 22:55:46 unittest -- unit/unittest.sh@305 -- # set +x 00:09:57.044 ===================== 00:09:57.044 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:57.044 00:09:57.044 00:09:57.044 00:09:57.044 real 3m44.116s 00:09:57.044 user 3m14.672s 00:09:57.044 sys 0m18.945s 00:09:57.044 22:55:46 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.044 22:55:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:57.044 ************************************ 00:09:57.044 END TEST unittest 00:09:57.044 ************************************ 00:09:57.303 22:55:46 -- common/autotest_common.sh@1142 -- # return 0 00:09:57.303 22:55:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:57.303 22:55:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:57.303 22:55:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:57.303 22:55:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:57.303 22:55:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.303 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:57.303 22:55:46 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:57.303 22:55:46 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:57.303 22:55:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.303 22:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.303 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:57.303 ************************************ 00:09:57.303 START TEST env 00:09:57.303 ************************************ 00:09:57.303 22:55:46 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:57.303 * Looking for test storage... 
00:09:57.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:57.303 22:55:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:57.303 22:55:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.303 22:55:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.303 22:55:46 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.303 ************************************ 00:09:57.303 START TEST env_memory 00:09:57.303 ************************************ 00:09:57.303 22:55:46 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:57.303 00:09:57.303 00:09:57.303 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.303 http://cunit.sourceforge.net/ 00:09:57.303 00:09:57.303 00:09:57.303 Suite: memory 00:09:57.303 Test: alloc and free memory map ...[2024-07-13 22:55:46.632691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:57.303 passed 00:09:57.303 Test: mem map translation ...[2024-07-13 22:55:46.682597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:57.303 [2024-07-13 22:55:46.682727] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:57.303 [2024-07-13 22:55:46.682854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:57.303 [2024-07-13 22:55:46.683224] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:57.561 passed 00:09:57.561 Test: mem map registration ...[2024-07-13 22:55:46.774318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:57.561 [2024-07-13 22:55:46.774469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:57.561 passed 00:09:57.561 Test: mem map adjacent registrations ...passed 00:09:57.561 00:09:57.561 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.561 suites 1 1 n/a 0 0 00:09:57.561 tests 4 4 4 0 0 00:09:57.561 asserts 152 152 152 0 n/a 00:09:57.561 00:09:57.561 Elapsed time = 0.302 seconds 00:09:57.561 00:09:57.561 real 0m0.340s 00:09:57.561 user 0m0.316s 00:09:57.561 sys 0m0.021s 00:09:57.561 22:55:46 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.561 22:55:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:57.561 ************************************ 00:09:57.561 END TEST env_memory 00:09:57.561 ************************************ 00:09:57.561 22:55:46 env -- common/autotest_common.sh@1142 -- # return 0 00:09:57.561 22:55:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:57.561 22:55:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.561 22:55:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.561 22:55:46 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.561 ************************************ 00:09:57.562 START TEST env_vtophys 
00:09:57.562 ************************************ 00:09:57.562 22:55:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:57.820 EAL: lib.eal log level changed from notice to debug 00:09:57.820 EAL: Detected lcore 0 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 1 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 2 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 3 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 4 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 5 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 6 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 7 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 8 as core 0 on socket 0 00:09:57.820 EAL: Detected lcore 9 as core 0 on socket 0 00:09:57.820 EAL: Maximum logical cores by configuration: 128 00:09:57.820 EAL: Detected CPU lcores: 10 00:09:57.820 EAL: Detected NUMA nodes: 1 00:09:57.820 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:57.820 EAL: Checking presence of .so 'librte_eal.so.23' 00:09:57.820 EAL: Checking presence of .so 'librte_eal.so' 00:09:57.820 EAL: Detected static linkage of DPDK 00:09:57.820 EAL: No shared files mode enabled, IPC will be disabled 00:09:57.820 EAL: Selected IOVA mode 'PA' 00:09:57.820 EAL: Probing VFIO support... 00:09:57.820 EAL: IOMMU type 1 (Type 1) is supported 00:09:57.820 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:57.820 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:57.820 EAL: VFIO support initialized 00:09:57.820 EAL: Ask a virtual area of 0x2e000 bytes 00:09:57.820 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:57.820 EAL: Setting up physically contiguous memory... 00:09:57.820 EAL: Setting maximum number of open files to 1048576 00:09:57.820 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:57.820 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:57.820 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.820 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:57.820 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.820 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.820 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:57.820 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:57.820 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.820 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:57.820 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.820 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.820 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:57.820 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:57.820 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.820 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:57.820 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.820 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.820 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:57.820 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:57.820 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.820 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:57.820 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.820 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.820 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:57.820 EAL: 
VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:57.820 EAL: Hugepages will be freed exactly as allocated. 00:09:57.820 EAL: No shared files mode enabled, IPC is disabled 00:09:57.820 EAL: No shared files mode enabled, IPC is disabled 00:09:57.820 EAL: TSC frequency is ~2200000 KHz 00:09:57.820 EAL: Main lcore 0 is ready (tid=7f96ea729a80;cpuset=[0]) 00:09:57.820 EAL: Trying to obtain current memory policy. 00:09:57.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.820 EAL: Restoring previous memory policy: 0 00:09:57.820 EAL: request: mp_malloc_sync 00:09:57.820 EAL: No shared files mode enabled, IPC is disabled 00:09:57.820 EAL: Heap on socket 0 was expanded by 2MB 00:09:57.820 EAL: No shared files mode enabled, IPC is disabled 00:09:57.820 EAL: Mem event callback 'spdk:(nil)' registered 00:09:57.820 00:09:57.820 00:09:57.820 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.820 http://cunit.sourceforge.net/ 00:09:57.820 00:09:57.820 00:09:57.820 Suite: components_suite 00:09:58.388 Test: vtophys_malloc_test ...passed 00:09:58.388 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 4MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 4MB 00:09:58.388 EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 6MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 6MB 00:09:58.388 EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 10MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 10MB 00:09:58.388 EAL: Trying to obtain current memory policy. 
00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 18MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 18MB 00:09:58.388 EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 34MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 34MB 00:09:58.388 EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 66MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 66MB 00:09:58.388 EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 130MB 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was shrunk by 130MB 00:09:58.388 EAL: Trying to obtain current memory policy. 00:09:58.388 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.388 EAL: Restoring previous memory policy: 0 00:09:58.388 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.388 EAL: request: mp_malloc_sync 00:09:58.388 EAL: No shared files mode enabled, IPC is disabled 00:09:58.388 EAL: Heap on socket 0 was expanded by 258MB 00:09:58.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.647 EAL: request: mp_malloc_sync 00:09:58.647 EAL: No shared files mode enabled, IPC is disabled 00:09:58.647 EAL: Heap on socket 0 was shrunk by 258MB 00:09:58.647 EAL: Trying to obtain current memory policy. 
00:09:58.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.647 EAL: Restoring previous memory policy: 0 00:09:58.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.647 EAL: request: mp_malloc_sync 00:09:58.647 EAL: No shared files mode enabled, IPC is disabled 00:09:58.647 EAL: Heap on socket 0 was expanded by 514MB 00:09:58.905 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.905 EAL: request: mp_malloc_sync 00:09:58.905 EAL: No shared files mode enabled, IPC is disabled 00:09:58.905 EAL: Heap on socket 0 was shrunk by 514MB 00:09:58.905 EAL: Trying to obtain current memory policy. 00:09:58.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:59.164 EAL: Restoring previous memory policy: 0 00:09:59.164 EAL: Calling mem event callback 'spdk:(nil)' 00:09:59.164 EAL: request: mp_malloc_sync 00:09:59.164 EAL: No shared files mode enabled, IPC is disabled 00:09:59.164 EAL: Heap on socket 0 was expanded by 1026MB 00:09:59.423 EAL: Calling mem event callback 'spdk:(nil)' 00:09:59.682 EAL: request: mp_malloc_sync 00:09:59.682 EAL: No shared files mode enabled, IPC is disabled 00:09:59.682 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:59.682 passed 00:09:59.682 00:09:59.682 Run Summary: Type Total Ran Passed Failed Inactive 00:09:59.682 suites 1 1 n/a 0 0 00:09:59.682 tests 2 2 2 0 0 00:09:59.682 asserts 6303 6303 6303 0 n/a 00:09:59.682 00:09:59.682 Elapsed time = 1.751 seconds 00:09:59.682 EAL: Calling mem event callback 'spdk:(nil)' 00:09:59.682 EAL: request: mp_malloc_sync 00:09:59.682 EAL: No shared files mode enabled, IPC is disabled 00:09:59.682 EAL: Heap on socket 0 was shrunk by 2MB 00:09:59.682 EAL: No shared files mode enabled, IPC is disabled 00:09:59.682 EAL: No shared files mode enabled, IPC is disabled 00:09:59.682 EAL: No shared files mode enabled, IPC is disabled 00:09:59.682 00:09:59.682 real 0m2.004s 00:09:59.682 user 0m1.003s 00:09:59.682 sys 0m0.868s 00:09:59.682 22:55:48 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.682 22:55:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:59.682 ************************************ 00:09:59.682 END TEST env_vtophys 00:09:59.682 ************************************ 00:09:59.682 22:55:49 env -- common/autotest_common.sh@1142 -- # return 0 00:09:59.682 22:55:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:59.682 22:55:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:59.682 22:55:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.682 22:55:49 env -- common/autotest_common.sh@10 -- # set +x 00:09:59.682 ************************************ 00:09:59.682 START TEST env_pci 00:09:59.682 ************************************ 00:09:59.682 22:55:49 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:59.682 00:09:59.682 00:09:59.682 CUnit - A unit testing framework for C - Version 2.1-3 00:09:59.682 http://cunit.sourceforge.net/ 00:09:59.682 00:09:59.682 00:09:59.682 Suite: pci 00:09:59.682 Test: pci_hook ...[2024-07-13 22:55:49.046967] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 122950 has claimed it 00:09:59.682 EAL: Cannot find device (10000:00:01.0) 00:09:59.682 EAL: Failed to attach device on primary process 00:09:59.682 passed 00:09:59.682 00:09:59.682 Run Summary: Type Total Ran Passed Failed 
Inactive 00:09:59.682 suites 1 1 n/a 0 0 00:09:59.682 tests 1 1 1 0 0 00:09:59.683 asserts 25 25 25 0 n/a 00:09:59.683 00:09:59.683 Elapsed time = 0.004 seconds 00:09:59.942 00:09:59.942 real 0m0.062s 00:09:59.942 user 0m0.043s 00:09:59.942 sys 0m0.020s 00:09:59.942 22:55:49 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.942 22:55:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:59.942 ************************************ 00:09:59.942 END TEST env_pci 00:09:59.942 ************************************ 00:09:59.942 22:55:49 env -- common/autotest_common.sh@1142 -- # return 0 00:09:59.942 22:55:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:59.942 22:55:49 env -- env/env.sh@15 -- # uname 00:09:59.942 22:55:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:59.942 22:55:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:59.942 22:55:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:59.942 22:55:49 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:59.942 22:55:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.942 22:55:49 env -- common/autotest_common.sh@10 -- # set +x 00:09:59.942 ************************************ 00:09:59.942 START TEST env_dpdk_post_init 00:09:59.942 ************************************ 00:09:59.942 22:55:49 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:59.942 EAL: Detected CPU lcores: 10 00:09:59.942 EAL: Detected NUMA nodes: 1 00:09:59.942 EAL: Detected static linkage of DPDK 00:09:59.942 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:59.942 EAL: Selected IOVA mode 'PA' 00:09:59.942 EAL: VFIO support initialized 00:09:59.942 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:59.942 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:00.200 Starting DPDK initialization... 00:10:00.200 Starting SPDK post initialization... 00:10:00.200 SPDK NVMe probe 00:10:00.200 Attaching to 0000:00:10.0 00:10:00.200 Attached to 0000:00:10.0 00:10:00.200 Cleaning up... 
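Note on env_dpdk_post_init: the "Starting DPDK initialization... / SPDK NVMe probe / Attaching to 0000:00:10.0" sequence above is the standard SPDK bring-up, i.e. initialize the env layer, then enumerate NVMe controllers over PCIe. A rough self-contained sketch of that flow (the app name and option values are illustrative; the real test lives under test/env/env_dpdk_post_init):

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	g_ctrlr = ctrlr;	/* detach after the probe loop finishes */
}

int
main(void)
{
	struct spdk_env_opts opts;
	int rc;

	spdk_env_opts_init(&opts);
	opts.name = "env_dpdk_post_init";	/* illustrative */
	opts.core_mask = "0x1";			/* matches the -c 0x1 argument above */
	opts.base_virtaddr = 0x200000000000ULL;	/* matches --base-virtaddr above */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	rc = spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	if (g_ctrlr != NULL) {
		printf("Cleaning up...\n");
		spdk_nvme_detach(g_ctrlr);
	}
	return rc != 0;
}

With no transport ID passed, spdk_nvme_probe scans the local PCIe bus, which is why the log shows only the single emulated controller at 0000:00:10.0.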
00:10:00.200 00:10:00.200 real 0m0.233s 00:10:00.200 user 0m0.067s 00:10:00.200 sys 0m0.069s 00:10:00.200 22:55:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.200 22:55:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:00.200 ************************************ 00:10:00.200 END TEST env_dpdk_post_init 00:10:00.200 ************************************ 00:10:00.200 22:55:49 env -- common/autotest_common.sh@1142 -- # return 0 00:10:00.200 22:55:49 env -- env/env.sh@26 -- # uname 00:10:00.200 22:55:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:00.200 22:55:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:00.200 22:55:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.201 22:55:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.201 22:55:49 env -- common/autotest_common.sh@10 -- # set +x 00:10:00.201 ************************************ 00:10:00.201 START TEST env_mem_callbacks 00:10:00.201 ************************************ 00:10:00.201 22:55:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:00.201 EAL: Detected CPU lcores: 10 00:10:00.201 EAL: Detected NUMA nodes: 1 00:10:00.201 EAL: Detected static linkage of DPDK 00:10:00.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:00.201 EAL: Selected IOVA mode 'PA' 00:10:00.201 EAL: VFIO support initialized 00:10:00.201 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:00.201 00:10:00.201 00:10:00.201 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.201 http://cunit.sourceforge.net/ 00:10:00.201 00:10:00.201 00:10:00.201 Suite: memory 00:10:00.201 Test: test ... 
00:10:00.201 register 0x200000200000 2097152 00:10:00.201 malloc 3145728 00:10:00.201 register 0x200000400000 4194304 00:10:00.201 buf 0x200000500000 len 3145728 PASSED 00:10:00.201 malloc 64 00:10:00.201 buf 0x2000004fff40 len 64 PASSED 00:10:00.201 malloc 4194304 00:10:00.201 register 0x200000800000 6291456 00:10:00.201 buf 0x200000a00000 len 4194304 PASSED 00:10:00.201 free 0x200000500000 3145728 00:10:00.201 free 0x2000004fff40 64 00:10:00.201 unregister 0x200000400000 4194304 PASSED 00:10:00.201 free 0x200000a00000 4194304 00:10:00.201 unregister 0x200000800000 6291456 PASSED 00:10:00.201 malloc 8388608 00:10:00.201 register 0x200000400000 10485760 00:10:00.201 buf 0x200000600000 len 8388608 PASSED 00:10:00.201 free 0x200000600000 8388608 00:10:00.201 unregister 0x200000400000 10485760 PASSED 00:10:00.201 passed 00:10:00.201 00:10:00.201 Run Summary: Type Total Ran Passed Failed Inactive 00:10:00.201 suites 1 1 n/a 0 0 00:10:00.201 tests 1 1 1 0 0 00:10:00.201 asserts 15 15 15 0 n/a 00:10:00.201 00:10:00.201 Elapsed time = 0.007 seconds 00:10:00.459 00:10:00.459 real 0m0.193s 00:10:00.459 user 0m0.043s 00:10:00.459 sys 0m0.050s 00:10:00.459 22:55:49 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.459 22:55:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:00.459 ************************************ 00:10:00.459 END TEST env_mem_callbacks 00:10:00.459 ************************************ 00:10:00.459 22:55:49 env -- common/autotest_common.sh@1142 -- # return 0 00:10:00.459 00:10:00.459 real 0m3.183s 00:10:00.459 user 0m1.674s 00:10:00.459 sys 0m1.167s 00:10:00.459 22:55:49 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.459 22:55:49 env -- common/autotest_common.sh@10 -- # set +x 00:10:00.459 ************************************ 00:10:00.459 END TEST env 00:10:00.459 ************************************ 00:10:00.459 22:55:49 -- common/autotest_common.sh@1142 -- # return 0 00:10:00.459 22:55:49 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:00.459 22:55:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.460 22:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.460 22:55:49 -- common/autotest_common.sh@10 -- # set +x 00:10:00.460 ************************************ 00:10:00.460 START TEST rpc 00:10:00.460 ************************************ 00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:00.460 * Looking for test storage... 00:10:00.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:00.460 22:55:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=123067 00:10:00.460 22:55:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.460 22:55:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 123067 00:10:00.460 22:55:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@829 -- # '[' -z 123067 ']' 00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
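Note on the mem_callbacks trace above (the register / malloc / buf / free / unregister lines): SPDK exposes memory registration events through a mem map with a notify callback; a DMA allocation that grows the DPDK heap triggers REGISTER notifications, and releasing a region triggers UNREGISTER. A loose reconstruction of how such a trace can be produced, assuming the test wires up a notify callback in roughly this way (the env layer must already be initialized):

#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
test_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
		enum spdk_mem_map_notify_action action,
		void *vaddr, size_t size)
{
	/* produces lines like "register 0x200000200000 2097152" */
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops test_map_ops = {
	.notify_cb = test_mem_notify,
	.are_contiguous = NULL,
};

static void
run_trace(void)
{
	struct spdk_mem_map *map;
	void *buf;

	map = spdk_mem_map_alloc(0, &test_map_ops, NULL);

	/* A DMA-capable allocation large enough to grow the heap fires a
	 * REGISTER notification before the buffer is handed back... */
	buf = spdk_malloc(3 * 1024 * 1024, 0, NULL,
			  SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	printf("buf %p len %d PASSED\n", buf, 3 * 1024 * 1024);

	/* ...and freeing the last buffer in that region fires UNREGISTER. */
	spdk_free(buf);

	spdk_mem_map_free(&map);
}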
00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.460 22:55:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.718 [2024-07-13 22:55:49.875611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:00.718 [2024-07-13 22:55:49.875877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123067 ] 00:10:00.718 [2024-07-13 22:55:50.017591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.718 [2024-07-13 22:55:50.089926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:00.718 [2024-07-13 22:55:50.090018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 123067' to capture a snapshot of events at runtime. 00:10:00.718 [2024-07-13 22:55:50.090066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.718 [2024-07-13 22:55:50.090094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.718 [2024-07-13 22:55:50.090137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid123067 for offline analysis/debug. 00:10:00.718 [2024-07-13 22:55:50.090182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.654 22:55:50 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.654 22:55:50 rpc -- common/autotest_common.sh@862 -- # return 0 00:10:01.654 22:55:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:01.654 22:55:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:01.654 22:55:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:01.654 22:55:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:01.654 22:55:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:01.654 22:55:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.654 22:55:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.654 ************************************ 00:10:01.654 START TEST rpc_integrity 00:10:01.654 ************************************ 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.654 
22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:01.654 { 00:10:01.654 "name": "Malloc0", 00:10:01.654 "aliases": [ 00:10:01.654 "3d0cda0e-bec4-42c3-a650-c4038fa82bb8" 00:10:01.654 ], 00:10:01.654 "product_name": "Malloc disk", 00:10:01.654 "block_size": 512, 00:10:01.654 "num_blocks": 16384, 00:10:01.654 "uuid": "3d0cda0e-bec4-42c3-a650-c4038fa82bb8", 00:10:01.654 "assigned_rate_limits": { 00:10:01.654 "rw_ios_per_sec": 0, 00:10:01.654 "rw_mbytes_per_sec": 0, 00:10:01.654 "r_mbytes_per_sec": 0, 00:10:01.654 "w_mbytes_per_sec": 0 00:10:01.654 }, 00:10:01.654 "claimed": false, 00:10:01.654 "zoned": false, 00:10:01.654 "supported_io_types": { 00:10:01.654 "read": true, 00:10:01.654 "write": true, 00:10:01.654 "unmap": true, 00:10:01.654 "flush": true, 00:10:01.654 "reset": true, 00:10:01.654 "nvme_admin": false, 00:10:01.654 "nvme_io": false, 00:10:01.654 "nvme_io_md": false, 00:10:01.654 "write_zeroes": true, 00:10:01.654 "zcopy": true, 00:10:01.654 "get_zone_info": false, 00:10:01.654 "zone_management": false, 00:10:01.654 "zone_append": false, 00:10:01.654 "compare": false, 00:10:01.654 "compare_and_write": false, 00:10:01.654 "abort": true, 00:10:01.654 "seek_hole": false, 00:10:01.654 "seek_data": false, 00:10:01.654 "copy": true, 00:10:01.654 "nvme_iov_md": false 00:10:01.654 }, 00:10:01.654 "memory_domains": [ 00:10:01.654 { 00:10:01.654 "dma_device_id": "system", 00:10:01.654 "dma_device_type": 1 00:10:01.654 }, 00:10:01.654 { 00:10:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.654 "dma_device_type": 2 00:10:01.654 } 00:10:01.654 ], 00:10:01.654 "driver_specific": {} 00:10:01.654 } 00:10:01.654 ]' 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:01.654 22:55:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.654 22:55:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.654 [2024-07-13 22:55:50.999795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:01.654 [2024-07-13 22:55:50.999933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.654 [2024-07-13 22:55:50.999968] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:10:01.654 [2024-07-13 22:55:50.999997] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.654 [2024-07-13 22:55:51.002785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.654 [2024-07-13 22:55:51.002877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:01.654 Passthru0 00:10:01.654 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:01.654 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:01.654 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.654 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.654 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.654 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:01.654 { 00:10:01.654 "name": "Malloc0", 00:10:01.654 "aliases": [ 00:10:01.654 "3d0cda0e-bec4-42c3-a650-c4038fa82bb8" 00:10:01.654 ], 00:10:01.654 "product_name": "Malloc disk", 00:10:01.654 "block_size": 512, 00:10:01.654 "num_blocks": 16384, 00:10:01.654 "uuid": "3d0cda0e-bec4-42c3-a650-c4038fa82bb8", 00:10:01.654 "assigned_rate_limits": { 00:10:01.654 "rw_ios_per_sec": 0, 00:10:01.654 "rw_mbytes_per_sec": 0, 00:10:01.654 "r_mbytes_per_sec": 0, 00:10:01.654 "w_mbytes_per_sec": 0 00:10:01.654 }, 00:10:01.654 "claimed": true, 00:10:01.654 "claim_type": "exclusive_write", 00:10:01.654 "zoned": false, 00:10:01.654 "supported_io_types": { 00:10:01.654 "read": true, 00:10:01.654 "write": true, 00:10:01.654 "unmap": true, 00:10:01.654 "flush": true, 00:10:01.654 "reset": true, 00:10:01.654 "nvme_admin": false, 00:10:01.654 "nvme_io": false, 00:10:01.654 "nvme_io_md": false, 00:10:01.654 "write_zeroes": true, 00:10:01.654 "zcopy": true, 00:10:01.654 "get_zone_info": false, 00:10:01.654 "zone_management": false, 00:10:01.654 "zone_append": false, 00:10:01.654 "compare": false, 00:10:01.654 "compare_and_write": false, 00:10:01.654 "abort": true, 00:10:01.654 "seek_hole": false, 00:10:01.654 "seek_data": false, 00:10:01.654 "copy": true, 00:10:01.654 "nvme_iov_md": false 00:10:01.654 }, 00:10:01.654 "memory_domains": [ 00:10:01.654 { 00:10:01.654 "dma_device_id": "system", 00:10:01.654 "dma_device_type": 1 00:10:01.654 }, 00:10:01.654 { 00:10:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.654 "dma_device_type": 2 00:10:01.654 } 00:10:01.654 ], 00:10:01.654 "driver_specific": {} 00:10:01.654 }, 00:10:01.654 { 00:10:01.654 "name": "Passthru0", 00:10:01.654 "aliases": [ 00:10:01.654 "ae797110-22aa-5069-980c-1da91d8bfbf4" 00:10:01.654 ], 00:10:01.654 "product_name": "passthru", 00:10:01.654 "block_size": 512, 00:10:01.654 "num_blocks": 16384, 00:10:01.654 "uuid": "ae797110-22aa-5069-980c-1da91d8bfbf4", 00:10:01.654 "assigned_rate_limits": { 00:10:01.654 "rw_ios_per_sec": 0, 00:10:01.654 "rw_mbytes_per_sec": 0, 00:10:01.654 "r_mbytes_per_sec": 0, 00:10:01.654 "w_mbytes_per_sec": 0 00:10:01.654 }, 00:10:01.654 "claimed": false, 00:10:01.654 "zoned": false, 00:10:01.654 "supported_io_types": { 00:10:01.654 "read": true, 00:10:01.654 "write": true, 00:10:01.654 "unmap": true, 00:10:01.654 "flush": true, 00:10:01.654 "reset": true, 00:10:01.654 "nvme_admin": false, 00:10:01.654 "nvme_io": false, 00:10:01.654 "nvme_io_md": false, 00:10:01.654 "write_zeroes": true, 00:10:01.654 "zcopy": true, 00:10:01.654 "get_zone_info": false, 00:10:01.654 "zone_management": false, 00:10:01.654 "zone_append": false, 00:10:01.654 "compare": false, 00:10:01.654 "compare_and_write": false, 00:10:01.654 "abort": true, 00:10:01.654 "seek_hole": false, 00:10:01.654 "seek_data": false, 00:10:01.654 "copy": true, 00:10:01.654 "nvme_iov_md": false 00:10:01.655 }, 00:10:01.655 "memory_domains": [ 00:10:01.655 { 00:10:01.655 "dma_device_id": "system", 00:10:01.655 "dma_device_type": 1 00:10:01.655 }, 00:10:01.655 { 00:10:01.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.655 "dma_device_type": 
2 00:10:01.655 } 00:10:01.655 ], 00:10:01.655 "driver_specific": { 00:10:01.655 "passthru": { 00:10:01.655 "name": "Passthru0", 00:10:01.655 "base_bdev_name": "Malloc0" 00:10:01.655 } 00:10:01.655 } 00:10:01.655 } 00:10:01.655 ]' 00:10:01.655 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:01.913 22:55:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:01.913 00:10:01.913 real 0m0.301s 00:10:01.913 user 0m0.213s 00:10:01.913 sys 0m0.024s 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.913 22:55:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 ************************************ 00:10:01.913 END TEST rpc_integrity 00:10:01.913 ************************************ 00:10:01.913 22:55:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:01.913 22:55:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:01.913 22:55:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:01.913 22:55:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.913 22:55:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.913 ************************************ 00:10:01.913 START TEST rpc_plugins 00:10:01.913 ************************************ 00:10:01.913 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:10:01.914 { 00:10:01.914 "name": "Malloc1", 00:10:01.914 "aliases": [ 00:10:01.914 "62a4f735-f5f5-4dd8-a483-866f44648387" 00:10:01.914 ], 00:10:01.914 "product_name": "Malloc disk", 00:10:01.914 "block_size": 4096, 00:10:01.914 "num_blocks": 256, 00:10:01.914 "uuid": "62a4f735-f5f5-4dd8-a483-866f44648387", 00:10:01.914 "assigned_rate_limits": { 00:10:01.914 "rw_ios_per_sec": 0, 00:10:01.914 "rw_mbytes_per_sec": 0, 00:10:01.914 "r_mbytes_per_sec": 0, 00:10:01.914 "w_mbytes_per_sec": 0 00:10:01.914 }, 00:10:01.914 "claimed": false, 00:10:01.914 "zoned": false, 00:10:01.914 "supported_io_types": { 00:10:01.914 "read": true, 00:10:01.914 "write": true, 00:10:01.914 "unmap": true, 00:10:01.914 "flush": true, 00:10:01.914 "reset": true, 00:10:01.914 "nvme_admin": false, 00:10:01.914 "nvme_io": false, 00:10:01.914 "nvme_io_md": false, 00:10:01.914 "write_zeroes": true, 00:10:01.914 "zcopy": true, 00:10:01.914 "get_zone_info": false, 00:10:01.914 "zone_management": false, 00:10:01.914 "zone_append": false, 00:10:01.914 "compare": false, 00:10:01.914 "compare_and_write": false, 00:10:01.914 "abort": true, 00:10:01.914 "seek_hole": false, 00:10:01.914 "seek_data": false, 00:10:01.914 "copy": true, 00:10:01.914 "nvme_iov_md": false 00:10:01.914 }, 00:10:01.914 "memory_domains": [ 00:10:01.914 { 00:10:01.914 "dma_device_id": "system", 00:10:01.914 "dma_device_type": 1 00:10:01.914 }, 00:10:01.914 { 00:10:01.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.914 "dma_device_type": 2 00:10:01.914 } 00:10:01.914 ], 00:10:01.914 "driver_specific": {} 00:10:01.914 } 00:10:01.914 ]' 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:01.914 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:01.914 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:02.172 22:55:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:02.172 00:10:02.172 real 0m0.151s 00:10:02.172 user 0m0.104s 00:10:02.172 sys 0m0.012s 00:10:02.172 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.172 22:55:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:02.172 ************************************ 00:10:02.172 END TEST rpc_plugins 00:10:02.172 ************************************ 00:10:02.173 22:55:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:02.173 22:55:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:02.173 22:55:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:02.173 22:55:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.173 22:55:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.173 ************************************ 00:10:02.173 
START TEST rpc_trace_cmd_test 00:10:02.173 ************************************ 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:02.173 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid123067", 00:10:02.173 "tpoint_group_mask": "0x8", 00:10:02.173 "iscsi_conn": { 00:10:02.173 "mask": "0x2", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "scsi": { 00:10:02.173 "mask": "0x4", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "bdev": { 00:10:02.173 "mask": "0x8", 00:10:02.173 "tpoint_mask": "0xffffffffffffffff" 00:10:02.173 }, 00:10:02.173 "nvmf_rdma": { 00:10:02.173 "mask": "0x10", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "nvmf_tcp": { 00:10:02.173 "mask": "0x20", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "ftl": { 00:10:02.173 "mask": "0x40", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "blobfs": { 00:10:02.173 "mask": "0x80", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "dsa": { 00:10:02.173 "mask": "0x200", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "thread": { 00:10:02.173 "mask": "0x400", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "nvme_pcie": { 00:10:02.173 "mask": "0x800", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "iaa": { 00:10:02.173 "mask": "0x1000", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "nvme_tcp": { 00:10:02.173 "mask": "0x2000", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "bdev_nvme": { 00:10:02.173 "mask": "0x4000", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 }, 00:10:02.173 "sock": { 00:10:02.173 "mask": "0x8000", 00:10:02.173 "tpoint_mask": "0x0" 00:10:02.173 } 00:10:02.173 }' 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:02.173 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:02.432 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:02.432 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:02.432 22:55:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:02.432 00:10:02.432 real 0m0.252s 00:10:02.432 user 0m0.229s 00:10:02.432 sys 0m0.015s 00:10:02.432 22:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.432 22:55:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.432 ************************************ 00:10:02.432 END 
TEST rpc_trace_cmd_test 00:10:02.432 ************************************ 00:10:02.432 22:55:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:02.432 22:55:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:02.432 22:55:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:02.432 22:55:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:02.432 22:55:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:02.432 22:55:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.432 22:55:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.432 ************************************ 00:10:02.432 START TEST rpc_daemon_integrity 00:10:02.432 ************************************ 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:02.432 { 00:10:02.432 "name": "Malloc2", 00:10:02.432 "aliases": [ 00:10:02.432 "c83528bc-5712-4455-bb2b-dd2d6bafb087" 00:10:02.432 ], 00:10:02.432 "product_name": "Malloc disk", 00:10:02.432 "block_size": 512, 00:10:02.432 "num_blocks": 16384, 00:10:02.432 "uuid": "c83528bc-5712-4455-bb2b-dd2d6bafb087", 00:10:02.432 "assigned_rate_limits": { 00:10:02.432 "rw_ios_per_sec": 0, 00:10:02.432 "rw_mbytes_per_sec": 0, 00:10:02.432 "r_mbytes_per_sec": 0, 00:10:02.432 "w_mbytes_per_sec": 0 00:10:02.432 }, 00:10:02.432 "claimed": false, 00:10:02.432 "zoned": false, 00:10:02.432 "supported_io_types": { 00:10:02.432 "read": true, 00:10:02.432 "write": true, 00:10:02.432 "unmap": true, 00:10:02.432 "flush": true, 00:10:02.432 "reset": true, 00:10:02.432 "nvme_admin": false, 00:10:02.432 "nvme_io": false, 00:10:02.432 "nvme_io_md": false, 00:10:02.432 "write_zeroes": true, 00:10:02.432 "zcopy": true, 00:10:02.432 "get_zone_info": false, 00:10:02.432 "zone_management": false, 00:10:02.432 "zone_append": false, 00:10:02.432 "compare": false, 00:10:02.432 "compare_and_write": false, 00:10:02.432 "abort": true, 00:10:02.432 "seek_hole": false, 
00:10:02.432 "seek_data": false, 00:10:02.432 "copy": true, 00:10:02.432 "nvme_iov_md": false 00:10:02.432 }, 00:10:02.432 "memory_domains": [ 00:10:02.432 { 00:10:02.432 "dma_device_id": "system", 00:10:02.432 "dma_device_type": 1 00:10:02.432 }, 00:10:02.432 { 00:10:02.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.432 "dma_device_type": 2 00:10:02.432 } 00:10:02.432 ], 00:10:02.432 "driver_specific": {} 00:10:02.432 } 00:10:02.432 ]' 00:10:02.432 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.692 [2024-07-13 22:55:51.854650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:02.692 [2024-07-13 22:55:51.854773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.692 [2024-07-13 22:55:51.854833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:02.692 [2024-07-13 22:55:51.854861] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.692 [2024-07-13 22:55:51.857709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.692 [2024-07-13 22:55:51.857809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:02.692 Passthru0 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.692 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:02.692 { 00:10:02.692 "name": "Malloc2", 00:10:02.692 "aliases": [ 00:10:02.692 "c83528bc-5712-4455-bb2b-dd2d6bafb087" 00:10:02.692 ], 00:10:02.692 "product_name": "Malloc disk", 00:10:02.692 "block_size": 512, 00:10:02.692 "num_blocks": 16384, 00:10:02.692 "uuid": "c83528bc-5712-4455-bb2b-dd2d6bafb087", 00:10:02.692 "assigned_rate_limits": { 00:10:02.692 "rw_ios_per_sec": 0, 00:10:02.692 "rw_mbytes_per_sec": 0, 00:10:02.692 "r_mbytes_per_sec": 0, 00:10:02.692 "w_mbytes_per_sec": 0 00:10:02.692 }, 00:10:02.692 "claimed": true, 00:10:02.692 "claim_type": "exclusive_write", 00:10:02.692 "zoned": false, 00:10:02.692 "supported_io_types": { 00:10:02.692 "read": true, 00:10:02.692 "write": true, 00:10:02.692 "unmap": true, 00:10:02.692 "flush": true, 00:10:02.692 "reset": true, 00:10:02.692 "nvme_admin": false, 00:10:02.692 "nvme_io": false, 00:10:02.692 "nvme_io_md": false, 00:10:02.692 "write_zeroes": true, 00:10:02.692 "zcopy": true, 00:10:02.692 "get_zone_info": false, 00:10:02.692 "zone_management": false, 00:10:02.692 "zone_append": false, 00:10:02.692 "compare": false, 00:10:02.692 "compare_and_write": false, 00:10:02.692 "abort": true, 00:10:02.692 "seek_hole": false, 00:10:02.692 "seek_data": false, 00:10:02.692 "copy": true, 00:10:02.692 "nvme_iov_md": false 00:10:02.692 }, 00:10:02.692 
"memory_domains": [ 00:10:02.692 { 00:10:02.692 "dma_device_id": "system", 00:10:02.692 "dma_device_type": 1 00:10:02.692 }, 00:10:02.692 { 00:10:02.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.692 "dma_device_type": 2 00:10:02.692 } 00:10:02.692 ], 00:10:02.692 "driver_specific": {} 00:10:02.692 }, 00:10:02.692 { 00:10:02.692 "name": "Passthru0", 00:10:02.692 "aliases": [ 00:10:02.692 "316022a9-ac36-5853-9e51-218b68b4deb4" 00:10:02.692 ], 00:10:02.692 "product_name": "passthru", 00:10:02.692 "block_size": 512, 00:10:02.692 "num_blocks": 16384, 00:10:02.692 "uuid": "316022a9-ac36-5853-9e51-218b68b4deb4", 00:10:02.692 "assigned_rate_limits": { 00:10:02.692 "rw_ios_per_sec": 0, 00:10:02.692 "rw_mbytes_per_sec": 0, 00:10:02.692 "r_mbytes_per_sec": 0, 00:10:02.692 "w_mbytes_per_sec": 0 00:10:02.692 }, 00:10:02.692 "claimed": false, 00:10:02.692 "zoned": false, 00:10:02.692 "supported_io_types": { 00:10:02.692 "read": true, 00:10:02.692 "write": true, 00:10:02.692 "unmap": true, 00:10:02.692 "flush": true, 00:10:02.692 "reset": true, 00:10:02.692 "nvme_admin": false, 00:10:02.692 "nvme_io": false, 00:10:02.692 "nvme_io_md": false, 00:10:02.692 "write_zeroes": true, 00:10:02.692 "zcopy": true, 00:10:02.692 "get_zone_info": false, 00:10:02.692 "zone_management": false, 00:10:02.692 "zone_append": false, 00:10:02.692 "compare": false, 00:10:02.692 "compare_and_write": false, 00:10:02.692 "abort": true, 00:10:02.692 "seek_hole": false, 00:10:02.692 "seek_data": false, 00:10:02.692 "copy": true, 00:10:02.692 "nvme_iov_md": false 00:10:02.692 }, 00:10:02.692 "memory_domains": [ 00:10:02.692 { 00:10:02.692 "dma_device_id": "system", 00:10:02.692 "dma_device_type": 1 00:10:02.692 }, 00:10:02.692 { 00:10:02.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.692 "dma_device_type": 2 00:10:02.692 } 00:10:02.692 ], 00:10:02.692 "driver_specific": { 00:10:02.692 "passthru": { 00:10:02.692 "name": "Passthru0", 00:10:02.692 "base_bdev_name": "Malloc2" 00:10:02.692 } 00:10:02.692 } 00:10:02.692 } 00:10:02.693 ]' 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:02.693 22:55:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:02.693 
22:55:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:02.693 00:10:02.693 real 0m0.307s 00:10:02.693 user 0m0.219s 00:10:02.693 sys 0m0.023s 00:10:02.693 ************************************ 00:10:02.693 END TEST rpc_daemon_integrity 00:10:02.693 22:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.693 22:55:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:02.693 ************************************ 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:02.693 22:55:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:02.693 22:55:52 rpc -- rpc/rpc.sh@84 -- # killprocess 123067 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@948 -- # '[' -z 123067 ']' 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@952 -- # kill -0 123067 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@953 -- # uname 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123067 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.693 killing process with pid 123067 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123067' 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@967 -- # kill 123067 00:10:02.693 22:55:52 rpc -- common/autotest_common.sh@972 -- # wait 123067 00:10:03.261 00:10:03.261 real 0m2.787s 00:10:03.261 user 0m3.620s 00:10:03.261 sys 0m0.602s 00:10:03.261 22:55:52 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.261 ************************************ 00:10:03.261 END TEST rpc 00:10:03.261 22:55:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.261 ************************************ 00:10:03.261 22:55:52 -- common/autotest_common.sh@1142 -- # return 0 00:10:03.261 22:55:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:03.261 22:55:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:03.261 22:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.261 22:55:52 -- common/autotest_common.sh@10 -- # set +x 00:10:03.261 ************************************ 00:10:03.261 START TEST skip_rpc 00:10:03.261 ************************************ 00:10:03.261 22:55:52 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:03.261 * Looking for test storage... 
00:10:03.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:03.261 22:55:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:03.261 22:55:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:03.261 22:55:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:03.261 22:55:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:03.261 22:55:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.261 22:55:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.261 ************************************ 00:10:03.261 START TEST skip_rpc 00:10:03.261 ************************************ 00:10:03.261 22:55:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:10:03.261 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=123296 00:10:03.261 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:03.261 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:03.261 22:55:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:03.520 [2024-07-13 22:55:52.707998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:03.520 [2024-07-13 22:55:52.708277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123296 ] 00:10:03.520 [2024-07-13 22:55:52.850354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.520 [2024-07-13 22:55:52.915588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 123296 
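[Editor's sketch] The trace above is the suite's NOT()/valid_exec_arg machinery inverting an exit status: with spdk_tgt started via --no-rpc-server, the rpc_cmd call must fail, and statuses above 128 (signal deaths) still count as real errors. A minimal bash sketch of the same assertion, where expect_failure is an illustrative stand-in for the real NOT() helper in autotest_common.sh:

    # Run a command that is expected to fail; succeed only if it did fail.
    expect_failure() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # >128 means killed by a signal: a real error
        (( es != 0 ))                # expected failure becomes success
    }

    # With the target running under --no-rpc-server, any RPC must fail:
    expect_failure rpc_cmd spdk_get_version && echo 'RPC correctly unavailable'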
00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 123296 ']' 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 123296 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123296 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:08.780 killing process with pid 123296 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123296' 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 123296 00:10:08.780 22:55:57 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 123296 00:10:08.780 00:10:08.780 real 0m5.455s 00:10:08.780 user 0m5.041s 00:10:08.780 sys 0m0.324s 00:10:08.780 22:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.780 22:55:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 ************************************ 00:10:08.780 END TEST skip_rpc 00:10:08.780 ************************************ 00:10:08.780 22:55:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:08.780 22:55:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:08.780 22:55:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:08.780 22:55:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.780 22:55:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 ************************************ 00:10:08.780 START TEST skip_rpc_with_json 00:10:08.780 ************************************ 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=123398 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 123398 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 123398 ']' 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:08.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
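[Editor's sketch] The waitforlisten call traced above blocks until the freshly launched target answers on its RPC socket. A rough equivalent, assuming that polling spdk_get_version over scripts/rpc.py is an acceptable liveness probe (the real helper also verifies the pid stays alive; the function name here is illustrative):

    wait_for_rpc_socket() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # rpc.py exits non-zero until the socket accepts connections
            scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # target never started listening within ~10s
    }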
00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.780 22:55:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.039 [2024-07-13 22:55:58.213783] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:09.039 [2024-07-13 22:55:58.214628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123398 ] 00:10:09.039 [2024-07-13 22:55:58.362202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.039 [2024-07-13 22:55:58.432359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.974 [2024-07-13 22:55:59.198173] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:09.974 request: 00:10:09.974 { 00:10:09.974 "trtype": "tcp", 00:10:09.974 "method": "nvmf_get_transports", 00:10:09.974 "req_id": 1 00:10:09.974 } 00:10:09.974 Got JSON-RPC error response 00:10:09.974 response: 00:10:09.974 { 00:10:09.974 "code": -19, 00:10:09.974 "message": "No such device" 00:10:09.974 } 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.974 [2024-07-13 22:55:59.210237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.974 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:09.974 { 00:10:09.974 "subsystems": [ 00:10:09.974 { 00:10:09.974 "subsystem": "scheduler", 00:10:09.974 "config": [ 00:10:09.974 { 00:10:09.974 "method": "framework_set_scheduler", 00:10:09.974 "params": { 00:10:09.974 "name": "static" 00:10:09.974 } 00:10:09.974 } 00:10:09.974 ] 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "subsystem": "vmd", 00:10:09.974 "config": [] 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "subsystem": "sock", 00:10:09.974 "config": [ 00:10:09.974 { 00:10:09.974 "method": "sock_set_default_impl", 00:10:09.974 "params": { 00:10:09.974 "impl_name": "posix" 00:10:09.974 } 00:10:09.974 
}, 00:10:09.974 { 00:10:09.974 "method": "sock_impl_set_options", 00:10:09.974 "params": { 00:10:09.974 "impl_name": "ssl", 00:10:09.974 "recv_buf_size": 4096, 00:10:09.974 "send_buf_size": 4096, 00:10:09.974 "enable_recv_pipe": true, 00:10:09.974 "enable_quickack": false, 00:10:09.974 "enable_placement_id": 0, 00:10:09.974 "enable_zerocopy_send_server": true, 00:10:09.974 "enable_zerocopy_send_client": false, 00:10:09.974 "zerocopy_threshold": 0, 00:10:09.974 "tls_version": 0, 00:10:09.974 "enable_ktls": false 00:10:09.974 } 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "method": "sock_impl_set_options", 00:10:09.974 "params": { 00:10:09.974 "impl_name": "posix", 00:10:09.974 "recv_buf_size": 2097152, 00:10:09.974 "send_buf_size": 2097152, 00:10:09.974 "enable_recv_pipe": true, 00:10:09.974 "enable_quickack": false, 00:10:09.974 "enable_placement_id": 0, 00:10:09.974 "enable_zerocopy_send_server": true, 00:10:09.974 "enable_zerocopy_send_client": false, 00:10:09.974 "zerocopy_threshold": 0, 00:10:09.974 "tls_version": 0, 00:10:09.974 "enable_ktls": false 00:10:09.974 } 00:10:09.974 } 00:10:09.974 ] 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "subsystem": "iobuf", 00:10:09.974 "config": [ 00:10:09.974 { 00:10:09.974 "method": "iobuf_set_options", 00:10:09.974 "params": { 00:10:09.974 "small_pool_count": 8192, 00:10:09.974 "large_pool_count": 1024, 00:10:09.974 "small_bufsize": 8192, 00:10:09.974 "large_bufsize": 135168 00:10:09.974 } 00:10:09.974 } 00:10:09.974 ] 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "subsystem": "keyring", 00:10:09.974 "config": [] 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "subsystem": "accel", 00:10:09.974 "config": [ 00:10:09.974 { 00:10:09.974 "method": "accel_set_options", 00:10:09.974 "params": { 00:10:09.974 "small_cache_size": 128, 00:10:09.974 "large_cache_size": 16, 00:10:09.974 "task_count": 2048, 00:10:09.974 "sequence_count": 2048, 00:10:09.974 "buf_count": 2048 00:10:09.974 } 00:10:09.974 } 00:10:09.974 ] 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "subsystem": "bdev", 00:10:09.974 "config": [ 00:10:09.974 { 00:10:09.974 "method": "bdev_set_options", 00:10:09.974 "params": { 00:10:09.974 "bdev_io_pool_size": 65535, 00:10:09.974 "bdev_io_cache_size": 256, 00:10:09.974 "bdev_auto_examine": true, 00:10:09.974 "iobuf_small_cache_size": 128, 00:10:09.974 "iobuf_large_cache_size": 16 00:10:09.974 } 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "method": "bdev_raid_set_options", 00:10:09.974 "params": { 00:10:09.974 "process_window_size_kb": 1024 00:10:09.974 } 00:10:09.974 }, 00:10:09.974 { 00:10:09.974 "method": "bdev_nvme_set_options", 00:10:09.974 "params": { 00:10:09.974 "action_on_timeout": "none", 00:10:09.974 "timeout_us": 0, 00:10:09.975 "timeout_admin_us": 0, 00:10:09.975 "keep_alive_timeout_ms": 10000, 00:10:09.975 "arbitration_burst": 0, 00:10:09.975 "low_priority_weight": 0, 00:10:09.975 "medium_priority_weight": 0, 00:10:09.975 "high_priority_weight": 0, 00:10:09.975 "nvme_adminq_poll_period_us": 10000, 00:10:09.975 "nvme_ioq_poll_period_us": 0, 00:10:09.975 "io_queue_requests": 0, 00:10:09.975 "delay_cmd_submit": true, 00:10:09.975 "transport_retry_count": 4, 00:10:09.975 "bdev_retry_count": 3, 00:10:09.975 "transport_ack_timeout": 0, 00:10:09.975 "ctrlr_loss_timeout_sec": 0, 00:10:09.975 "reconnect_delay_sec": 0, 00:10:09.975 "fast_io_fail_timeout_sec": 0, 00:10:09.975 "disable_auto_failback": false, 00:10:09.975 "generate_uuids": false, 00:10:09.975 "transport_tos": 0, 00:10:09.975 "nvme_error_stat": false, 00:10:09.975 "rdma_srq_size": 0, 
00:10:09.975 "io_path_stat": false, 00:10:09.975 "allow_accel_sequence": false, 00:10:09.975 "rdma_max_cq_size": 0, 00:10:09.975 "rdma_cm_event_timeout_ms": 0, 00:10:09.975 "dhchap_digests": [ 00:10:09.975 "sha256", 00:10:09.975 "sha384", 00:10:09.975 "sha512" 00:10:09.975 ], 00:10:09.975 "dhchap_dhgroups": [ 00:10:09.975 "null", 00:10:09.975 "ffdhe2048", 00:10:09.975 "ffdhe3072", 00:10:09.975 "ffdhe4096", 00:10:09.975 "ffdhe6144", 00:10:09.975 "ffdhe8192" 00:10:09.975 ] 00:10:09.975 } 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "method": "bdev_nvme_set_hotplug", 00:10:09.975 "params": { 00:10:09.975 "period_us": 100000, 00:10:09.975 "enable": false 00:10:09.975 } 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "method": "bdev_iscsi_set_options", 00:10:09.975 "params": { 00:10:09.975 "timeout_sec": 30 00:10:09.975 } 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "method": "bdev_wait_for_examine" 00:10:09.975 } 00:10:09.975 ] 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "subsystem": "nvmf", 00:10:09.975 "config": [ 00:10:09.975 { 00:10:09.975 "method": "nvmf_set_config", 00:10:09.975 "params": { 00:10:09.975 "discovery_filter": "match_any", 00:10:09.975 "admin_cmd_passthru": { 00:10:09.975 "identify_ctrlr": false 00:10:09.975 } 00:10:09.975 } 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "method": "nvmf_set_max_subsystems", 00:10:09.975 "params": { 00:10:09.975 "max_subsystems": 1024 00:10:09.975 } 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "method": "nvmf_set_crdt", 00:10:09.975 "params": { 00:10:09.975 "crdt1": 0, 00:10:09.975 "crdt2": 0, 00:10:09.975 "crdt3": 0 00:10:09.975 } 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "method": "nvmf_create_transport", 00:10:09.975 "params": { 00:10:09.975 "trtype": "TCP", 00:10:09.975 "max_queue_depth": 128, 00:10:09.975 "max_io_qpairs_per_ctrlr": 127, 00:10:09.975 "in_capsule_data_size": 4096, 00:10:09.975 "max_io_size": 131072, 00:10:09.975 "io_unit_size": 131072, 00:10:09.975 "max_aq_depth": 128, 00:10:09.975 "num_shared_buffers": 511, 00:10:09.975 "buf_cache_size": 4294967295, 00:10:09.975 "dif_insert_or_strip": false, 00:10:09.975 "zcopy": false, 00:10:09.975 "c2h_success": true, 00:10:09.975 "sock_priority": 0, 00:10:09.975 "abort_timeout_sec": 1, 00:10:09.975 "ack_timeout": 0, 00:10:09.975 "data_wr_pool_size": 0 00:10:09.975 } 00:10:09.975 } 00:10:09.975 ] 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "subsystem": "nbd", 00:10:09.975 "config": [] 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "subsystem": "vhost_blk", 00:10:09.975 "config": [] 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "subsystem": "scsi", 00:10:09.975 "config": null 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "subsystem": "iscsi", 00:10:09.975 "config": [ 00:10:09.975 { 00:10:09.975 "method": "iscsi_set_options", 00:10:09.975 "params": { 00:10:09.975 "node_base": "iqn.2016-06.io.spdk", 00:10:09.975 "max_sessions": 128, 00:10:09.975 "max_connections_per_session": 2, 00:10:09.975 "max_queue_depth": 64, 00:10:09.975 "default_time2wait": 2, 00:10:09.975 "default_time2retain": 20, 00:10:09.975 "first_burst_length": 8192, 00:10:09.975 "immediate_data": true, 00:10:09.975 "allow_duplicated_isid": false, 00:10:09.975 "error_recovery_level": 0, 00:10:09.975 "nop_timeout": 60, 00:10:09.975 "nop_in_interval": 30, 00:10:09.975 "disable_chap": false, 00:10:09.975 "require_chap": false, 00:10:09.975 "mutual_chap": false, 00:10:09.975 "chap_group": 0, 00:10:09.975 "max_large_datain_per_connection": 64, 00:10:09.975 "max_r2t_per_connection": 4, 00:10:09.975 "pdu_pool_size": 36864, 00:10:09.975 
"immediate_data_pool_size": 16384, 00:10:09.975 "data_out_pool_size": 2048 00:10:09.975 } 00:10:09.975 } 00:10:09.975 ] 00:10:09.975 }, 00:10:09.975 { 00:10:09.975 "subsystem": "vhost_scsi", 00:10:09.975 "config": [] 00:10:09.975 } 00:10:09.975 ] 00:10:09.975 } 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 123398 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 123398 ']' 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 123398 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123398 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.975 killing process with pid 123398 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123398' 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 123398 00:10:09.975 22:55:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 123398 00:10:10.541 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=123431 00:10:10.541 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:10.541 22:55:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 123431 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 123431 ']' 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 123431 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123431 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:15.834 killing process with pid 123431 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123431' 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 123431 00:10:15.834 22:56:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 123431 00:10:16.092 22:56:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:16.092 22:56:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:16.092 00:10:16.092 real 0m7.212s 00:10:16.092 user 0m6.842s 
00:10:16.092 sys 0m0.747s 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 ************************************ 00:10:16.093 END TEST skip_rpc_with_json 00:10:16.093 ************************************ 00:10:16.093 22:56:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:16.093 22:56:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:16.093 22:56:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:16.093 22:56:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.093 22:56:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.093 ************************************ 00:10:16.093 START TEST skip_rpc_with_delay 00:10:16.093 ************************************ 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:16.093 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:16.093 [2024-07-13 22:56:05.471209] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
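[Editor's sketch] The error just logged is the point of the skip_rpc_with_delay test: --wait-for-rpc pauses startup until an RPC (framework_start_init) resumes it, so pairing it with --no-rpc-server can never make progress and spdk_tgt refuses to start; the unclaim_cpu_cores line that follows is its cleanup path. Reproduced standalone (binary path as in the trace):

    if ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'failed as expected: nothing could ever deliver framework_start_init'
    fi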
00:10:16.093 [2024-07-13 22:56:05.471411] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:10:16.351 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:10:16.351 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:16.351 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:16.351 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:16.351 00:10:16.351 real 0m0.113s 00:10:16.351 user 0m0.062s 00:10:16.351 sys 0m0.052s 00:10:16.351 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.351 22:56:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:16.351 ************************************ 00:10:16.351 END TEST skip_rpc_with_delay 00:10:16.351 ************************************ 00:10:16.351 22:56:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:16.351 22:56:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:16.352 22:56:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:16.352 22:56:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:16.352 22:56:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:16.352 22:56:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.352 22:56:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.352 ************************************ 00:10:16.352 START TEST exit_on_failed_rpc_init 00:10:16.352 ************************************ 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=123553 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 123553 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 123553 ']' 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.352 22:56:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:16.352 [2024-07-13 22:56:05.647189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:16.352 [2024-07-13 22:56:05.647426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123553 ] 00:10:16.610 [2024-07-13 22:56:05.796071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.610 [2024-07-13 22:56:05.876369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:17.545 22:56:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:17.545 [2024-07-13 22:56:06.767772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:17.545 [2024-07-13 22:56:06.768053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123576 ] 00:10:17.545 [2024-07-13 22:56:06.915880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.803 [2024-07-13 22:56:06.991140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.803 [2024-07-13 22:56:06.991309] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
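[Editor's sketch] The "in use. Specify another." error above is the second target colliding with the first on the default /var/tmp/spdk.sock; the Unable-to-start and spdk_app_stop lines that follow are the exit path exit_on_failed_rpc_init asserts. A standalone reproduction, with paths illustrative:

    ./build/bin/spdk_tgt -m 0x1 &    # first instance claims /var/tmp/spdk.sock
    sleep 1
    ./build/bin/spdk_tgt -m 0x2      # same default socket -> "in use. Specify another."
    # a second instance is normally given its own socket instead:
    #   ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock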
00:10:17.803 [2024-07-13 22:56:06.991358] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:17.803 [2024-07-13 22:56:06.991393] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 123553 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 123553 ']' 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 123553 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123553 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:17.803 killing process with pid 123553 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123553' 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 123553 00:10:17.803 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 123553 00:10:18.370 00:10:18.370 real 0m2.032s 00:10:18.370 user 0m2.330s 00:10:18.370 sys 0m0.540s 00:10:18.370 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.370 22:56:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:18.370 ************************************ 00:10:18.370 END TEST exit_on_failed_rpc_init 00:10:18.370 ************************************ 00:10:18.370 22:56:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:18.370 22:56:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:18.370 00:10:18.370 real 0m15.105s 00:10:18.370 user 0m14.458s 00:10:18.370 sys 0m1.771s 00:10:18.370 22:56:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.370 ************************************ 00:10:18.370 22:56:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.370 END TEST skip_rpc 00:10:18.370 ************************************ 00:10:18.370 22:56:07 -- common/autotest_common.sh@1142 -- # return 0 00:10:18.370 22:56:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:18.370 22:56:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:10:18.370 22:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.370 22:56:07 -- common/autotest_common.sh@10 -- # set +x 00:10:18.370 ************************************ 00:10:18.370 START TEST rpc_client 00:10:18.370 ************************************ 00:10:18.370 22:56:07 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:18.629 * Looking for test storage... 00:10:18.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:18.629 22:56:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:18.629 OK 00:10:18.629 22:56:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:18.629 00:10:18.629 real 0m0.130s 00:10:18.629 user 0m0.077s 00:10:18.629 sys 0m0.063s 00:10:18.629 22:56:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.629 22:56:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:18.629 ************************************ 00:10:18.629 END TEST rpc_client 00:10:18.629 ************************************ 00:10:18.629 22:56:07 -- common/autotest_common.sh@1142 -- # return 0 00:10:18.629 22:56:07 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:18.629 22:56:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:18.629 22:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.629 22:56:07 -- common/autotest_common.sh@10 -- # set +x 00:10:18.629 ************************************ 00:10:18.629 START TEST json_config 00:10:18.629 ************************************ 00:10:18.629 22:56:07 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:372a053b-8154-42b7-a3d4-af10f430e353 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=372a053b-8154-42b7-a3d4-af10f430e353 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.629 22:56:07 
json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.629 22:56:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.629 22:56:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.629 22:56:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.629 22:56:07 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:18.629 22:56:07 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:18.629 22:56:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:18.629 22:56:07 json_config -- paths/export.sh@5 -- # export PATH 00:10:18.629 22:56:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@47 -- # : 0 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.629 22:56:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 
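The nvmf/common.sh preamble above generates a fresh host identity per run. Condensed into a sketch, assuming nvme-cli provides gen-hostnqn and that the host ID is the trailing UUID of the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:372a053b-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the UUID after the last colon
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # NVME_HOST is later expanded into 'nvme connect' calls against nqn.2016-06.io.spdk:testnqn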
00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:18.629 INFO: JSON configuration test init 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:10:18.629 22:56:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.629 22:56:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:10:18.629 22:56:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.629 22:56:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:18.629 22:56:07 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:10:18.629 22:56:07 json_config -- json_config/common.sh@9 -- # local app=target 00:10:18.629 22:56:07 json_config -- json_config/common.sh@10 -- # shift 00:10:18.629 22:56:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:18.629 22:56:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:18.629 22:56:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:18.630 22:56:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:18.630 22:56:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:18.630 22:56:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=123706 00:10:18.630 Waiting for target to run... 00:10:18.630 22:56:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:18.630 22:56:07 json_config -- json_config/common.sh@25 -- # waitforlisten 123706 /var/tmp/spdk_tgt.sock 00:10:18.630 22:56:07 json_config -- common/autotest_common.sh@829 -- # '[' -z 123706 ']' 00:10:18.630 22:56:07 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:18.630 22:56:07 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
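Everything in json_config/common.sh hangs off the associative arrays declared above; starting an app then reduces to one generic launch-and-wait path, sketched here (SPDK_BIN_DIR stands in for the build/bin path seen in the trace; waitforlisten is the autotest helper that polls the pid and RPC socket):

    declare -A app_pid app_socket app_params
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'

    # --wait-for-rpc boots the target with subsystem init deferred until told otherwise.
    "$SPDK_BIN_DIR/spdk_tgt" ${app_params[target]} -r "${app_socket[target]}" --wait-for-rpc &
    app_pid[target]=$!
    waitforlisten "${app_pid[target]}" "${app_socket[target]}"

The unquoted ${app_params[target]} is deliberate: the value holds several flags that must word-split.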
00:10:18.630 22:56:07 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:18.630 22:56:07 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.630 22:56:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:18.630 22:56:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:18.887 [2024-07-13 22:56:08.046531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:18.888 [2024-07-13 22:56:08.046789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123706 ] 00:10:19.146 [2024-07-13 22:56:08.489269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.404 [2024-07-13 22:56:08.556621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.969 22:56:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.969 22:56:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:19.969 00:10:19.969 22:56:09 json_config -- json_config/common.sh@26 -- # echo '' 00:10:19.969 22:56:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:10:19.969 22:56:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:10:19.969 22:56:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.969 22:56:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:19.969 22:56:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:10:19.969 22:56:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:10:19.969 22:56:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.970 22:56:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:19.970 22:56:09 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:19.970 22:56:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:10:19.970 22:56:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:20.228 22:56:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.228 22:56:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:20.228 22:56:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:20.228 22:56:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:20.486 22:56:09 
json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:20.486 22:56:09 json_config -- json_config/json_config.sh@48 -- # local get_types 00:10:20.486 22:56:09 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:10:20.486 22:56:09 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:10:20.486 22:56:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.487 22:56:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@55 -- # return 0 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:10:20.487 22:56:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.487 22:56:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:20.487 22:56:09 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:20.487 22:56:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:21.051 22:56:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:21.051 Nvme0n1p0 Nvme0n1p1 00:10:21.051 22:56:10 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:21.051 22:56:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:21.309 [2024-07-13 22:56:10.689628] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:21.309 [2024-07-13 22:56:10.689757] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: Malloc0 00:10:21.309 00:10:21.309 22:56:10 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:21.309 22:56:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:21.567 Malloc3 00:10:21.824 22:56:10 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:21.825 22:56:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:21.825 [2024-07-13 22:56:11.218000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:21.825 [2024-07-13 22:56:11.218136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.825 [2024-07-13 22:56:11.218184] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:10:21.825 [2024-07-13 22:56:11.218259] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.825 [2024-07-13 22:56:11.221206] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.825 [2024-07-13 22:56:11.221313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:21.825 PTBdevFromMalloc3 00:10:22.092 22:56:11 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:22.092 22:56:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:22.092 Null0 00:10:22.092 22:56:11 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:22.092 22:56:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:22.361 Malloc0 00:10:22.620 22:56:11 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:22.620 22:56:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:22.620 Malloc1 00:10:22.879 22:56:12 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:22.879 22:56:12 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:23.137 102400+0 records in 00:10:23.137 102400+0 records out 00:10:23.137 104857600 bytes (105 MB, 100 MiB) copied, 0.341007 s, 307 MB/s 00:10:23.137 22:56:12 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:23.137 22:56:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:23.395 aio_disk 00:10:23.395 22:56:12 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:23.396 22:56:12 json_config -- json_config/json_config.sh@147 -- # 
tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:23.396 22:56:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:23.654 e421f983-8dea-4d11-acbc-fd6e04f51fd0 00:10:23.654 22:56:12 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:23.654 22:56:12 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:23.654 22:56:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:23.916 22:56:13 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:23.916 22:56:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:24.175 22:56:13 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:24.175 22:56:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:24.433 22:56:13 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:24.433 22:56:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:dbd48b33-0899-4a37-bc2d-5aa158c4be3c bdev_register:dbb89911-8b5a-46e8-8552-79fb68e0dcc8 bdev_register:b9adef9e-c894-42bc-bf32-8a83aed09184 bdev_register:bf38072d-c614-4650-867a-87e29c10576c 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:dbd48b33-0899-4a37-bc2d-5aa158c4be3c bdev_register:dbb89911-8b5a-46e8-8552-79fb68e0dcc8 bdev_register:b9adef9e-c894-42bc-bf32-8a83aed09184 bdev_register:bf38072d-c614-4650-867a-87e29c10576c 00:10:24.692 22:56:13 json_config -- 
json_config/json_config.sh@71 -- # sort 00:10:24.692 22:56:13 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@72 -- # sort 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:24.692 22:56:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:24.692 22:56:14 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:24.951 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:24.951 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.951 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 
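The IFS=:/read churn above is get_notifications decoding the event stream one record at a time; condensed into standalone form, reusing the same rpc.py and jq invocations visible in the trace:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc notify_get_notifications -i 0 \
        | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' \
        | while IFS=: read -r ev_type ev_ctx event_id; do
              echo "${ev_type}:${ev_ctx}"          # e.g. bdev_register:Nvme0n1
          done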
00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:dbd48b33-0899-4a37-bc2d-5aa158c4be3c 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:dbb89911-8b5a-46e8-8552-79fb68e0dcc8 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:b9adef9e-c894-42bc-bf32-8a83aed09184 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:bf38072d-c614-4650-867a-87e29c10576c 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b9adef9e-c894-42bc-bf32-8a83aed09184 bdev_register:bf38072d-c614-4650-867a-87e29c10576c bdev_register:dbb89911-8b5a-46e8-8552-79fb68e0dcc8 bdev_register:dbd48b33-0899-4a37-bc2d-5aa158c4be3c != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\9\a\d\e\f\9\e\-\c\8\9\4\-\4\2\b\c\-\b\f\3\2\-\8\a\8\3\a\e\d\0\9\1\8\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\f\3\8\0\7\2\d\-\c\6\1\4\-\4\6\5\0\-\8\6\7\a\-\8\7\e\2\9\c\1\0\5\7\6\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\b\b\8\9\9\1\1\-\8\b\5\a\-\4\6\e\8\-\8\5\5\2\-\7\9\f\b\6\8\e\0\d\c\c\8\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\b\d\4\8\b\3\3\-\0\8\9\9\-\4\a\3\7\-\b\c\2\d\-\5\a\a\1\5\8\c\4\b\e\3\c ]] 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@86 -- # cat 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b9adef9e-c894-42bc-bf32-8a83aed09184 bdev_register:bf38072d-c614-4650-867a-87e29c10576c bdev_register:dbb89911-8b5a-46e8-8552-79fb68e0dcc8 bdev_register:dbd48b33-0899-4a37-bc2d-5aa158c4be3c 00:10:24.952 Expected events matched: 00:10:24.952 bdev_register:Malloc0 00:10:24.952 bdev_register:Malloc0p0 00:10:24.952 bdev_register:Malloc0p1 00:10:24.952 bdev_register:Malloc0p2 00:10:24.952 bdev_register:Malloc1 00:10:24.952 bdev_register:Malloc3 00:10:24.952 bdev_register:Null0 00:10:24.952 bdev_register:Nvme0n1 00:10:24.952 bdev_register:Nvme0n1p0 00:10:24.952 bdev_register:Nvme0n1p1 00:10:24.952 bdev_register:PTBdevFromMalloc3 00:10:24.952 bdev_register:aio_disk 00:10:24.952 bdev_register:b9adef9e-c894-42bc-bf32-8a83aed09184 00:10:24.952 bdev_register:bf38072d-c614-4650-867a-87e29c10576c 00:10:24.952 bdev_register:dbb89911-8b5a-46e8-8552-79fb68e0dcc8 00:10:24.952 bdev_register:dbd48b33-0899-4a37-bc2d-5aa158c4be3c 00:10:24.952 22:56:14 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:10:24.952 22:56:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.952 22:56:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.211 22:56:14 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:25.211 22:56:14 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:25.211 22:56:14 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:25.211 22:56:14 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:25.211 22:56:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.211 22:56:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.211 22:56:14 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:25.211 22:56:14 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:25.211 22:56:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:25.471 MallocBdevForConfigChangeCheck 00:10:25.471 22:56:14 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:25.471 22:56:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.471 22:56:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.471 22:56:14 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:25.471 22:56:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:25.730 INFO: shutting down applications... 00:10:25.730 22:56:15 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
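The backslash wall above is just bash's xtrace escaping an unquoted [[ ... != pattern ]] comparison of two sorted word lists. The check itself is small; a sketch of tgt_check_notifications with the pattern match reduced to a literal string compare:

    tgt_check_notifications() {
        local expected recorded
        expected=($(printf '%s\n' "$@" | sort))    # events the test expects to have caused
        recorded=($(get_notifications | sort))     # events the target actually reported
        [[ "${recorded[*]}" == "${expected[*]}" ]] || { echo 'ERROR: event mismatch'; return 1; }
        printf ' %s\n' "${expected[@]}"            # the "Expected events matched:" listing
    }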
00:10:25.730 22:56:15 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:25.730 22:56:15 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:25.730 22:56:15 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:25.730 22:56:15 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:26.000 [2024-07-13 22:56:15.248369] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:26.000 Calling clear_vhost_scsi_subsystem 00:10:26.000 Calling clear_iscsi_subsystem 00:10:26.000 Calling clear_vhost_blk_subsystem 00:10:26.000 Calling clear_nbd_subsystem 00:10:26.000 Calling clear_nvmf_subsystem 00:10:26.000 Calling clear_bdev_subsystem 00:10:26.263 22:56:15 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:26.263 22:56:15 json_config -- json_config/json_config.sh@343 -- # count=100 00:10:26.263 22:56:15 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:26.263 22:56:15 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:26.263 22:56:15 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:26.263 22:56:15 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:26.523 22:56:15 json_config -- json_config/json_config.sh@345 -- # break 00:10:26.523 22:56:15 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:26.523 22:56:15 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:26.523 22:56:15 json_config -- json_config/common.sh@31 -- # local app=target 00:10:26.523 22:56:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:26.523 22:56:15 json_config -- json_config/common.sh@35 -- # [[ -n 123706 ]] 00:10:26.523 22:56:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 123706 00:10:26.523 22:56:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:26.523 22:56:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:26.523 22:56:15 json_config -- json_config/common.sh@41 -- # kill -0 123706 00:10:26.523 22:56:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:27.091 22:56:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:27.091 22:56:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:27.091 22:56:16 json_config -- json_config/common.sh@41 -- # kill -0 123706 00:10:27.091 22:56:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:27.091 22:56:16 json_config -- json_config/common.sh@43 -- # break 00:10:27.091 22:56:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:27.091 SPDK target shutdown done 00:10:27.091 22:56:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:27.091 INFO: relaunching applications... 00:10:27.091 22:56:16 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
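Shutdown, as the i/kill -0/sleep loop above shows, is SIGINT plus a bounded liveness poll; condensed:

    kill -SIGINT "${app_pid[target]}"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "${app_pid[target]}" 2>/dev/null || break   # process gone => clean shutdown
        sleep 0.5
    done
    echo 'SPDK target shutdown done'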
00:10:27.091 22:56:16 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:27.091 22:56:16 json_config -- json_config/common.sh@9 -- # local app=target 00:10:27.091 22:56:16 json_config -- json_config/common.sh@10 -- # shift 00:10:27.091 22:56:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:27.091 22:56:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:27.091 22:56:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:27.091 22:56:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:27.091 22:56:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:27.091 22:56:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=123964 00:10:27.091 Waiting for target to run... 00:10:27.091 22:56:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:27.091 22:56:16 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:27.091 22:56:16 json_config -- json_config/common.sh@25 -- # waitforlisten 123964 /var/tmp/spdk_tgt.sock 00:10:27.091 22:56:16 json_config -- common/autotest_common.sh@829 -- # '[' -z 123964 ']' 00:10:27.091 22:56:16 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:27.091 22:56:16 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:27.091 22:56:16 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:27.091 22:56:16 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.091 22:56:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:27.091 [2024-07-13 22:56:16.369794] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:27.091 [2024-07-13 22:56:16.370080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123964 ] 00:10:27.658 [2024-07-13 22:56:16.806701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.658 [2024-07-13 22:56:16.871800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.658 [2024-07-13 22:56:17.031424] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:27.658 [2024-07-13 22:56:17.031545] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:27.658 [2024-07-13 22:56:17.039388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:27.658 [2024-07-13 22:56:17.039458] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:27.658 [2024-07-13 22:56:17.047433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:27.658 [2024-07-13 22:56:17.047545] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:27.658 [2024-07-13 22:56:17.047584] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:27.917 [2024-07-13 22:56:17.136684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:27.917 [2024-07-13 22:56:17.136837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.917 [2024-07-13 22:56:17.136891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.917 [2024-07-13 22:56:17.136976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.917 [2024-07-13 22:56:17.137687] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.917 [2024-07-13 22:56:17.137760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:27.917 22:56:17 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.917 00:10:27.917 22:56:17 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:27.917 22:56:17 json_config -- json_config/common.sh@26 -- # echo '' 00:10:27.917 22:56:17 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:27.917 INFO: Checking if target configuration is the same... 00:10:27.917 22:56:17 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:27.917 22:56:17 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:27.917 22:56:17 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:27.917 22:56:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:27.917 + '[' 2 -ne 2 ']' 00:10:27.917 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:27.917 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:27.917 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:27.917 +++ basename /dev/fd/62 00:10:27.917 ++ mktemp /tmp/62.XXX 00:10:27.917 + tmp_file_1=/tmp/62.xXJ 00:10:27.917 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:27.917 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:27.917 + tmp_file_2=/tmp/spdk_tgt_config.json.wZK 00:10:27.917 + ret=0 00:10:27.917 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:28.484 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:28.484 + diff -u /tmp/62.xXJ /tmp/spdk_tgt_config.json.wZK 00:10:28.484 INFO: JSON config files are the same 00:10:28.484 + echo 'INFO: JSON config files are the same' 00:10:28.484 + rm /tmp/62.xXJ /tmp/spdk_tgt_config.json.wZK 00:10:28.484 + exit 0 00:10:28.484 22:56:17 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:28.484 INFO: changing configuration and checking if this can be detected... 00:10:28.484 22:56:17 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:28.484 22:56:17 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:28.484 22:56:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:28.743 22:56:17 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:28.743 22:56:17 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:28.743 22:56:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:28.743 + '[' 2 -ne 2 ']' 00:10:28.743 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:28.743 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:28.743 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:28.743 +++ basename /dev/fd/62 00:10:28.743 ++ mktemp /tmp/62.XXX 00:10:28.743 + tmp_file_1=/tmp/62.EIa 00:10:28.743 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:28.743 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:28.743 + tmp_file_2=/tmp/spdk_tgt_config.json.mXl 00:10:28.743 + ret=0 00:10:28.743 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:29.001 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:29.260 + diff -u /tmp/62.EIa /tmp/spdk_tgt_config.json.mXl 00:10:29.260 + ret=1 00:10:29.260 + echo '=== Start of file: /tmp/62.EIa ===' 00:10:29.260 + cat /tmp/62.EIa 00:10:29.260 + echo '=== End of file: /tmp/62.EIa ===' 00:10:29.260 + echo '' 00:10:29.260 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mXl ===' 00:10:29.260 + cat /tmp/spdk_tgt_config.json.mXl 00:10:29.260 + echo '=== End of file: /tmp/spdk_tgt_config.json.mXl ===' 00:10:29.260 + echo '' 00:10:29.260 + rm /tmp/62.EIa /tmp/spdk_tgt_config.json.mXl 00:10:29.260 + exit 1 00:10:29.260 INFO: configuration change detected. 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
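json_diff.sh, whose trace appears above, verifies the round trip: dump the live config over RPC, normalize both sides, diff. The MallocBdevForConfigChangeCheck bdev created earlier exists only so that deleting it forces the second diff to fail. A sketch, assuming config_filter.py -method sort reads a config on stdin and writes the sorted form to stdout:

    live=$(mktemp /tmp/62.XXX)
    disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    $rpc save_config | config_filter.py -method sort > "$live"
    config_filter.py -method sort < spdk_tgt_config.json > "$disk"
    if diff -u "$disk" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi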
00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:29.260 22:56:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.260 22:56:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@317 -- # [[ -n 123964 ]] 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:29.260 22:56:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.260 22:56:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:29.260 22:56:18 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:29.260 22:56:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:29.544 22:56:18 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:29.544 22:56:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:29.802 22:56:19 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:29.802 22:56:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:30.060 22:56:19 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:30.060 22:56:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:30.319 22:56:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:30.319 22:56:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:30.319 22:56:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:30.319 22:56:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:30.319 22:56:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:30.319 22:56:19 json_config -- json_config/json_config.sh@323 -- # killprocess 123964 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 123964 ']' 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@952 -- # kill -0 123964 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@953 -- # uname 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123964 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:30.319 22:56:19 json_config 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:30.319 killing process with pid 123964 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123964' 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@967 -- # kill 123964 00:10:30.319 22:56:19 json_config -- common/autotest_common.sh@972 -- # wait 123964 00:10:30.578 22:56:19 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.578 22:56:19 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:30.578 22:56:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:30.578 22:56:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:30.837 22:56:19 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:30.837 INFO: Success 00:10:30.837 22:56:19 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:30.837 00:10:30.837 real 0m12.100s 00:10:30.837 user 0m18.751s 00:10:30.837 sys 0m2.451s 00:10:30.837 22:56:19 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.837 22:56:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:30.837 ************************************ 00:10:30.837 END TEST json_config 00:10:30.837 ************************************ 00:10:30.837 22:56:20 -- common/autotest_common.sh@1142 -- # return 0 00:10:30.838 22:56:20 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:30.838 22:56:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:30.838 22:56:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.838 22:56:20 -- common/autotest_common.sh@10 -- # set +x 00:10:30.838 ************************************ 00:10:30.838 START TEST json_config_extra_key 00:10:30.838 ************************************ 00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da612c88-89a1-417d-bb8f-7f6b4f36ebd1 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@18 
-- # NVME_HOSTID=da612c88-89a1-417d-bb8f-7f6b4f36ebd1 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.838 22:56:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.838 22:56:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.838 22:56:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.838 22:56:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.838 22:56:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.838 22:56:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.838 22:56:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:30.838 22:56:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:10:30.838 22:56:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:30.838 INFO: launching applications... 00:10:30.838 22:56:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=124133 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:30.838 Waiting for target to run... 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:30.838 22:56:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 124133 /var/tmp/spdk_tgt.sock 00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 124133 ']' 00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
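The json_config_extra_key flow above boils down to: launch spdk_tgt with a prebuilt JSON config on a private RPC socket, then block until that socket answers. A minimal sketch of the launch-and-wait pattern, assuming the repo paths shown in the trace (the liveness probe and retry cadence here are illustrative, not the literal common.sh implementation):

  # Start the target with the extra_key JSON config on its own RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!

  # Poll until the app services RPCs; spdk_get_version is a cheap liveness probe.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version &>/dev/null; do
      sleep 0.5
  done
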
00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.838 22:56:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:30.838 [2024-07-13 22:56:20.186693] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:30.838 [2024-07-13 22:56:20.186981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124133 ] 00:10:31.405 [2024-07-13 22:56:20.632340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.405 [2024-07-13 22:56:20.697445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.971 22:56:21 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.971 22:56:21 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:10:31.971 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:31.971 INFO: shutting down applications... 00:10:31.971 22:56:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:31.971 22:56:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 124133 ]] 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 124133 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124133 00:10:31.971 22:56:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124133 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:32.537 SPDK target shutdown done 00:10:32.537 22:56:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:32.537 Success 00:10:32.537 22:56:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:32.537 ************************************ 00:10:32.537 END TEST json_config_extra_key 00:10:32.537 ************************************ 00:10:32.537 00:10:32.537 real 0m1.600s 00:10:32.537 user 0m1.483s 00:10:32.537 sys 0m0.489s 00:10:32.537 22:56:21 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.537 22:56:21 json_config_extra_key -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.537 22:56:21 -- common/autotest_common.sh@1142 -- # return 0 00:10:32.537 22:56:21 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:32.537 22:56:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.537 22:56:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.537 22:56:21 -- common/autotest_common.sh@10 -- # set +x 00:10:32.537 ************************************ 00:10:32.537 START TEST alias_rpc 00:10:32.537 ************************************ 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:32.537 * Looking for test storage... 00:10:32.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:32.537 22:56:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:32.537 22:56:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:32.537 22:56:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=124212 00:10:32.537 22:56:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 124212 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 124212 ']' 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.537 22:56:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.537 [2024-07-13 22:56:21.831756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
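Worth noting from the json_config_extra_key shutdown just above, before the alias_rpc trace continues: the harness never resorts to kill -9. It sends SIGINT and then polls with kill -0 for up to thirty half-second intervals, exactly the (( i < 30 )) / sleep 0.5 loop in the trace. A sketch of that pattern, reusing $app_pid from the previous snippet:

  # Graceful shutdown: SIGINT, then poll for exit (~15 s budget, as in the trace).
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done
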
00:10:32.537 [2024-07-13 22:56:21.832255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124212 ] 00:10:32.796 [2024-07-13 22:56:21.971533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.796 [2024-07-13 22:56:22.064183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.731 22:56:22 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.732 22:56:22 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:33.732 22:56:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:33.732 22:56:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 124212 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 124212 ']' 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 124212 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124212 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124212' 00:10:33.732 killing process with pid 124212 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@967 -- # kill 124212 00:10:33.732 22:56:23 alias_rpc -- common/autotest_common.sh@972 -- # wait 124212 00:10:34.299 00:10:34.299 real 0m1.821s 00:10:34.299 user 0m2.064s 00:10:34.299 sys 0m0.434s 00:10:34.299 22:56:23 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.299 22:56:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.299 ************************************ 00:10:34.299 END TEST alias_rpc 00:10:34.299 ************************************ 00:10:34.299 22:56:23 -- common/autotest_common.sh@1142 -- # return 0 00:10:34.299 22:56:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:34.299 22:56:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:34.299 22:56:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:34.299 22:56:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.299 22:56:23 -- common/autotest_common.sh@10 -- # set +x 00:10:34.299 ************************************ 00:10:34.299 START TEST spdkcli_tcp 00:10:34.299 ************************************ 00:10:34.299 22:56:23 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:34.299 * Looking for test storage... 
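The load_config call that alias_rpc just exercised replays a JSON configuration over the RPC socket (the trace additionally passes -i). A sketch of round-tripping a running target's configuration this way; save_config is the standard counterpart in scripts/rpc.py, and the temp file name is illustrative:

  # Capture the current configuration, then replay it against the target.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/spdk_config.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < /tmp/spdk_config.json
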
00:10:34.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=124301 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 124301 00:10:34.300 22:56:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 124301 ']' 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.300 22:56:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.558 [2024-07-13 22:56:23.723000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
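The -m 0x3 argument above is a core bitmask: bits 0 and 1 are set, which is why the EAL records that follow report two cores available and reactors on cores 0 and 1. A quick illustrative helper for decoding such masks:

  # Decode an SPDK core mask into a core list, e.g. 0x3 -> '0 1'.
  mask=0x3
  for ((core = 0; (mask >> core) > 0; core++)); do
      (( (mask >> core) & 1 )) && printf '%d ' "$core"
  done
  echo
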
00:10:34.558 [2024-07-13 22:56:23.723281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124301 ] 00:10:34.558 [2024-07-13 22:56:23.875249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:34.558 [2024-07-13 22:56:23.942432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.558 [2024-07-13 22:56:23.942431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.492 22:56:24 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.492 22:56:24 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:10:35.492 22:56:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=124321 00:10:35.492 22:56:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:35.492 22:56:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:35.751 [ 00:10:35.751 "spdk_get_version", 00:10:35.751 "rpc_get_methods", 00:10:35.751 "keyring_get_keys", 00:10:35.751 "trace_get_info", 00:10:35.751 "trace_get_tpoint_group_mask", 00:10:35.751 "trace_disable_tpoint_group", 00:10:35.751 "trace_enable_tpoint_group", 00:10:35.751 "trace_clear_tpoint_mask", 00:10:35.751 "trace_set_tpoint_mask", 00:10:35.751 "framework_get_pci_devices", 00:10:35.751 "framework_get_config", 00:10:35.751 "framework_get_subsystems", 00:10:35.751 "iobuf_get_stats", 00:10:35.751 "iobuf_set_options", 00:10:35.751 "sock_get_default_impl", 00:10:35.751 "sock_set_default_impl", 00:10:35.751 "sock_impl_set_options", 00:10:35.751 "sock_impl_get_options", 00:10:35.751 "vmd_rescan", 00:10:35.751 "vmd_remove_device", 00:10:35.751 "vmd_enable", 00:10:35.751 "accel_get_stats", 00:10:35.751 "accel_set_options", 00:10:35.751 "accel_set_driver", 00:10:35.751 "accel_crypto_key_destroy", 00:10:35.751 "accel_crypto_keys_get", 00:10:35.751 "accel_crypto_key_create", 00:10:35.751 "accel_assign_opc", 00:10:35.751 "accel_get_module_info", 00:10:35.751 "accel_get_opc_assignments", 00:10:35.751 "notify_get_notifications", 00:10:35.751 "notify_get_types", 00:10:35.751 "bdev_get_histogram", 00:10:35.751 "bdev_enable_histogram", 00:10:35.751 "bdev_set_qos_limit", 00:10:35.751 "bdev_set_qd_sampling_period", 00:10:35.751 "bdev_get_bdevs", 00:10:35.751 "bdev_reset_iostat", 00:10:35.751 "bdev_get_iostat", 00:10:35.751 "bdev_examine", 00:10:35.751 "bdev_wait_for_examine", 00:10:35.751 "bdev_set_options", 00:10:35.751 "scsi_get_devices", 00:10:35.751 "thread_set_cpumask", 00:10:35.751 "framework_get_governor", 00:10:35.751 "framework_get_scheduler", 00:10:35.751 "framework_set_scheduler", 00:10:35.751 "framework_get_reactors", 00:10:35.751 "thread_get_io_channels", 00:10:35.751 "thread_get_pollers", 00:10:35.751 "thread_get_stats", 00:10:35.751 "framework_monitor_context_switch", 00:10:35.751 "spdk_kill_instance", 00:10:35.751 "log_enable_timestamps", 00:10:35.751 "log_get_flags", 00:10:35.751 "log_clear_flag", 00:10:35.751 "log_set_flag", 00:10:35.751 "log_get_level", 00:10:35.751 "log_set_level", 00:10:35.751 "log_get_print_level", 00:10:35.751 "log_set_print_level", 00:10:35.751 "framework_enable_cpumask_locks", 00:10:35.751 "framework_disable_cpumask_locks", 00:10:35.751 "framework_wait_init", 00:10:35.751 "framework_start_init", 00:10:35.751 
"virtio_blk_create_transport", 00:10:35.751 "virtio_blk_get_transports", 00:10:35.751 "vhost_controller_set_coalescing", 00:10:35.751 "vhost_get_controllers", 00:10:35.751 "vhost_delete_controller", 00:10:35.751 "vhost_create_blk_controller", 00:10:35.751 "vhost_scsi_controller_remove_target", 00:10:35.751 "vhost_scsi_controller_add_target", 00:10:35.751 "vhost_start_scsi_controller", 00:10:35.751 "vhost_create_scsi_controller", 00:10:35.751 "nbd_get_disks", 00:10:35.751 "nbd_stop_disk", 00:10:35.751 "nbd_start_disk", 00:10:35.751 "env_dpdk_get_mem_stats", 00:10:35.751 "nvmf_stop_mdns_prr", 00:10:35.751 "nvmf_publish_mdns_prr", 00:10:35.751 "nvmf_subsystem_get_listeners", 00:10:35.751 "nvmf_subsystem_get_qpairs", 00:10:35.751 "nvmf_subsystem_get_controllers", 00:10:35.751 "nvmf_get_stats", 00:10:35.751 "nvmf_get_transports", 00:10:35.751 "nvmf_create_transport", 00:10:35.751 "nvmf_get_targets", 00:10:35.751 "nvmf_delete_target", 00:10:35.751 "nvmf_create_target", 00:10:35.752 "nvmf_subsystem_allow_any_host", 00:10:35.752 "nvmf_subsystem_remove_host", 00:10:35.752 "nvmf_subsystem_add_host", 00:10:35.752 "nvmf_ns_remove_host", 00:10:35.752 "nvmf_ns_add_host", 00:10:35.752 "nvmf_subsystem_remove_ns", 00:10:35.752 "nvmf_subsystem_add_ns", 00:10:35.752 "nvmf_subsystem_listener_set_ana_state", 00:10:35.752 "nvmf_discovery_get_referrals", 00:10:35.752 "nvmf_discovery_remove_referral", 00:10:35.752 "nvmf_discovery_add_referral", 00:10:35.752 "nvmf_subsystem_remove_listener", 00:10:35.752 "nvmf_subsystem_add_listener", 00:10:35.752 "nvmf_delete_subsystem", 00:10:35.752 "nvmf_create_subsystem", 00:10:35.752 "nvmf_get_subsystems", 00:10:35.752 "nvmf_set_crdt", 00:10:35.752 "nvmf_set_config", 00:10:35.752 "nvmf_set_max_subsystems", 00:10:35.752 "iscsi_get_histogram", 00:10:35.752 "iscsi_enable_histogram", 00:10:35.752 "iscsi_set_options", 00:10:35.752 "iscsi_get_auth_groups", 00:10:35.752 "iscsi_auth_group_remove_secret", 00:10:35.752 "iscsi_auth_group_add_secret", 00:10:35.752 "iscsi_delete_auth_group", 00:10:35.752 "iscsi_create_auth_group", 00:10:35.752 "iscsi_set_discovery_auth", 00:10:35.752 "iscsi_get_options", 00:10:35.752 "iscsi_target_node_request_logout", 00:10:35.752 "iscsi_target_node_set_redirect", 00:10:35.752 "iscsi_target_node_set_auth", 00:10:35.752 "iscsi_target_node_add_lun", 00:10:35.752 "iscsi_get_stats", 00:10:35.752 "iscsi_get_connections", 00:10:35.752 "iscsi_portal_group_set_auth", 00:10:35.752 "iscsi_start_portal_group", 00:10:35.752 "iscsi_delete_portal_group", 00:10:35.752 "iscsi_create_portal_group", 00:10:35.752 "iscsi_get_portal_groups", 00:10:35.752 "iscsi_delete_target_node", 00:10:35.752 "iscsi_target_node_remove_pg_ig_maps", 00:10:35.752 "iscsi_target_node_add_pg_ig_maps", 00:10:35.752 "iscsi_create_target_node", 00:10:35.752 "iscsi_get_target_nodes", 00:10:35.752 "iscsi_delete_initiator_group", 00:10:35.752 "iscsi_initiator_group_remove_initiators", 00:10:35.752 "iscsi_initiator_group_add_initiators", 00:10:35.752 "iscsi_create_initiator_group", 00:10:35.752 "iscsi_get_initiator_groups", 00:10:35.752 "keyring_linux_set_options", 00:10:35.752 "keyring_file_remove_key", 00:10:35.752 "keyring_file_add_key", 00:10:35.752 "iaa_scan_accel_module", 00:10:35.752 "dsa_scan_accel_module", 00:10:35.752 "ioat_scan_accel_module", 00:10:35.752 "accel_error_inject_error", 00:10:35.752 "bdev_iscsi_delete", 00:10:35.752 "bdev_iscsi_create", 00:10:35.752 "bdev_iscsi_set_options", 00:10:35.752 "bdev_virtio_attach_controller", 00:10:35.752 "bdev_virtio_scsi_get_devices", 00:10:35.752 
"bdev_virtio_detach_controller", 00:10:35.752 "bdev_virtio_blk_set_hotplug", 00:10:35.752 "bdev_ftl_set_property", 00:10:35.752 "bdev_ftl_get_properties", 00:10:35.752 "bdev_ftl_get_stats", 00:10:35.752 "bdev_ftl_unmap", 00:10:35.752 "bdev_ftl_unload", 00:10:35.752 "bdev_ftl_delete", 00:10:35.752 "bdev_ftl_load", 00:10:35.752 "bdev_ftl_create", 00:10:35.752 "bdev_aio_delete", 00:10:35.752 "bdev_aio_rescan", 00:10:35.752 "bdev_aio_create", 00:10:35.752 "blobfs_create", 00:10:35.752 "blobfs_detect", 00:10:35.752 "blobfs_set_cache_size", 00:10:35.752 "bdev_zone_block_delete", 00:10:35.752 "bdev_zone_block_create", 00:10:35.752 "bdev_delay_delete", 00:10:35.752 "bdev_delay_create", 00:10:35.752 "bdev_delay_update_latency", 00:10:35.752 "bdev_split_delete", 00:10:35.752 "bdev_split_create", 00:10:35.752 "bdev_error_inject_error", 00:10:35.752 "bdev_error_delete", 00:10:35.752 "bdev_error_create", 00:10:35.752 "bdev_raid_set_options", 00:10:35.752 "bdev_raid_remove_base_bdev", 00:10:35.752 "bdev_raid_add_base_bdev", 00:10:35.752 "bdev_raid_delete", 00:10:35.752 "bdev_raid_create", 00:10:35.752 "bdev_raid_get_bdevs", 00:10:35.752 "bdev_lvol_set_parent_bdev", 00:10:35.752 "bdev_lvol_set_parent", 00:10:35.752 "bdev_lvol_check_shallow_copy", 00:10:35.752 "bdev_lvol_start_shallow_copy", 00:10:35.752 "bdev_lvol_grow_lvstore", 00:10:35.752 "bdev_lvol_get_lvols", 00:10:35.752 "bdev_lvol_get_lvstores", 00:10:35.752 "bdev_lvol_delete", 00:10:35.752 "bdev_lvol_set_read_only", 00:10:35.752 "bdev_lvol_resize", 00:10:35.752 "bdev_lvol_decouple_parent", 00:10:35.752 "bdev_lvol_inflate", 00:10:35.752 "bdev_lvol_rename", 00:10:35.752 "bdev_lvol_clone_bdev", 00:10:35.752 "bdev_lvol_clone", 00:10:35.752 "bdev_lvol_snapshot", 00:10:35.752 "bdev_lvol_create", 00:10:35.752 "bdev_lvol_delete_lvstore", 00:10:35.752 "bdev_lvol_rename_lvstore", 00:10:35.752 "bdev_lvol_create_lvstore", 00:10:35.752 "bdev_passthru_delete", 00:10:35.752 "bdev_passthru_create", 00:10:35.752 "bdev_nvme_cuse_unregister", 00:10:35.752 "bdev_nvme_cuse_register", 00:10:35.752 "bdev_opal_new_user", 00:10:35.752 "bdev_opal_set_lock_state", 00:10:35.752 "bdev_opal_delete", 00:10:35.752 "bdev_opal_get_info", 00:10:35.752 "bdev_opal_create", 00:10:35.752 "bdev_nvme_opal_revert", 00:10:35.752 "bdev_nvme_opal_init", 00:10:35.752 "bdev_nvme_send_cmd", 00:10:35.752 "bdev_nvme_get_path_iostat", 00:10:35.752 "bdev_nvme_get_mdns_discovery_info", 00:10:35.752 "bdev_nvme_stop_mdns_discovery", 00:10:35.752 "bdev_nvme_start_mdns_discovery", 00:10:35.752 "bdev_nvme_set_multipath_policy", 00:10:35.752 "bdev_nvme_set_preferred_path", 00:10:35.752 "bdev_nvme_get_io_paths", 00:10:35.752 "bdev_nvme_remove_error_injection", 00:10:35.752 "bdev_nvme_add_error_injection", 00:10:35.752 "bdev_nvme_get_discovery_info", 00:10:35.752 "bdev_nvme_stop_discovery", 00:10:35.752 "bdev_nvme_start_discovery", 00:10:35.752 "bdev_nvme_get_controller_health_info", 00:10:35.752 "bdev_nvme_disable_controller", 00:10:35.752 "bdev_nvme_enable_controller", 00:10:35.752 "bdev_nvme_reset_controller", 00:10:35.752 "bdev_nvme_get_transport_statistics", 00:10:35.752 "bdev_nvme_apply_firmware", 00:10:35.752 "bdev_nvme_detach_controller", 00:10:35.752 "bdev_nvme_get_controllers", 00:10:35.752 "bdev_nvme_attach_controller", 00:10:35.752 "bdev_nvme_set_hotplug", 00:10:35.752 "bdev_nvme_set_options", 00:10:35.752 "bdev_null_resize", 00:10:35.752 "bdev_null_delete", 00:10:35.752 "bdev_null_create", 00:10:35.752 "bdev_malloc_delete", 00:10:35.752 "bdev_malloc_create" 00:10:35.752 ] 00:10:35.752 22:56:24 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:35.752 22:56:24 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.752 22:56:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.752 22:56:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:35.752 22:56:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 124301 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 124301 ']' 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 124301 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124301 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124301' 00:10:35.752 killing process with pid 124301 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 124301 00:10:35.752 22:56:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 124301 00:10:36.320 00:10:36.320 real 0m1.861s 00:10:36.320 user 0m3.460s 00:10:36.320 sys 0m0.523s 00:10:36.320 ************************************ 00:10:36.320 END TEST spdkcli_tcp 00:10:36.320 ************************************ 00:10:36.320 22:56:25 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.320 22:56:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 22:56:25 -- common/autotest_common.sh@1142 -- # return 0 00:10:36.320 22:56:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:36.320 22:56:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:36.320 22:56:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.320 22:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 ************************************ 00:10:36.320 START TEST dpdk_mem_utility 00:10:36.320 ************************************ 00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:36.320 * Looking for test storage... 00:10:36.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:36.320 22:56:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:36.320 22:56:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=124395 00:10:36.320 22:56:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 124395 00:10:36.320 22:56:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 124395 ']' 00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
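Before the dpdk_mem_utility output continues, the spdkcli_tcp run above deserves a note: spdk_tgt only listens on a UNIX-domain socket, so the test fronts it with socat on TCP 9998 and points rpc.py at the TCP side, which is what produced the long rpc_get_methods listing. A minimal reproduction using the port and socket path from the trace:

  # Bridge TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Drive an RPC over TCP; -r/-t are the retry count and timeout seen in the trace.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid" 2>/dev/null || true  # socat may already have exited with the connection
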
00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.320 22:56:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:36.320 [2024-07-13 22:56:25.619244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:36.320 [2024-07-13 22:56:25.619453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124395 ] 00:10:36.579 [2024-07-13 22:56:25.754644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.579 [2024-07-13 22:56:25.821296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.146 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.146 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:10:37.146 22:56:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:37.146 22:56:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:37.146 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.146 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:37.406 { 00:10:37.406 "filename": "/tmp/spdk_mem_dump.txt" 00:10:37.406 } 00:10:37.406 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.406 22:56:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:37.406 DPDK memory size 814.000000 MiB in 1 heap(s) 00:10:37.406 1 heaps totaling size 814.000000 MiB 00:10:37.406 size: 814.000000 MiB heap id: 0 00:10:37.406 end heaps---------- 00:10:37.406 8 mempools totaling size 598.116089 MiB 00:10:37.406 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:37.406 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:37.406 size: 84.521057 MiB name: bdev_io_124395 00:10:37.406 size: 51.011292 MiB name: evtpool_124395 00:10:37.406 size: 50.003479 MiB name: msgpool_124395 00:10:37.406 size: 21.763794 MiB name: PDU_Pool 00:10:37.406 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:37.406 size: 0.026123 MiB name: Session_Pool 00:10:37.406 end mempools------- 00:10:37.406 6 memzones totaling size 4.142822 MiB 00:10:37.406 size: 1.000366 MiB name: RG_ring_0_124395 00:10:37.406 size: 1.000366 MiB name: RG_ring_1_124395 00:10:37.406 size: 1.000366 MiB name: RG_ring_4_124395 00:10:37.406 size: 1.000366 MiB name: RG_ring_5_124395 00:10:37.406 size: 0.125366 MiB name: RG_ring_2_124395 00:10:37.406 size: 0.015991 MiB name: RG_ring_3_124395 00:10:37.406 end memzones------- 00:10:37.406 22:56:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:37.406 heap id: 0 total size: 814.000000 MiB number of busy elements: 226 number of free elements: 15 00:10:37.406 list of free elements. 
size: 12.485474 MiB 00:10:37.406 element at address: 0x200000400000 with size: 1.999512 MiB 00:10:37.406 element at address: 0x200018e00000 with size: 0.999878 MiB 00:10:37.406 element at address: 0x200019000000 with size: 0.999878 MiB 00:10:37.406 element at address: 0x200003e00000 with size: 0.996277 MiB 00:10:37.406 element at address: 0x200031c00000 with size: 0.994446 MiB 00:10:37.406 element at address: 0x200013800000 with size: 0.978699 MiB 00:10:37.406 element at address: 0x200007000000 with size: 0.959839 MiB 00:10:37.406 element at address: 0x200019200000 with size: 0.936584 MiB 00:10:37.406 element at address: 0x200000200000 with size: 0.836670 MiB 00:10:37.406 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:10:37.406 element at address: 0x20000b200000 with size: 0.489807 MiB 00:10:37.406 element at address: 0x200000800000 with size: 0.486511 MiB 00:10:37.406 element at address: 0x200019400000 with size: 0.485657 MiB 00:10:37.406 element at address: 0x200027e00000 with size: 0.402161 MiB 00:10:37.406 element at address: 0x200003a00000 with size: 0.351501 MiB 00:10:37.406 list of standard malloc elements. size: 199.251953 MiB 00:10:37.406 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:10:37.406 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:10:37.406 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:37.406 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:10:37.406 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:37.406 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:37.406 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:10:37.406 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:37.406 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:10:37.406 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7580 with size: 0.000183 MiB 
00:10:37.406 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087c980 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003adb300 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003adb500 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003affa80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003affb40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:10:37.406 element at 
address: 0x2000070fdd80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:10:37.406 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:10:37.406 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa931c0 
with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:10:37.407 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e66f40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e67000 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6dc00 with size: 0.000183 MiB 
00:10:37.407 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:10:37.407 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:10:37.407 list of memzone associated elements. 
size: 602.262573 MiB 00:10:37.407 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:10:37.407 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:37.407 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:10:37.407 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:37.407 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:10:37.407 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_124395_0 00:10:37.407 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:10:37.407 associated memzone info: size: 48.002930 MiB name: MP_evtpool_124395_0 00:10:37.407 element at address: 0x200003fff380 with size: 48.003052 MiB 00:10:37.407 associated memzone info: size: 48.002930 MiB name: MP_msgpool_124395_0 00:10:37.407 element at address: 0x2000195be940 with size: 20.255554 MiB 00:10:37.407 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:37.407 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:10:37.407 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:37.407 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:10:37.407 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_124395 00:10:37.407 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:10:37.407 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_124395 00:10:37.407 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:37.407 associated memzone info: size: 1.007996 MiB name: MP_evtpool_124395 00:10:37.407 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:10:37.407 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:37.407 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:10:37.407 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:37.407 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:10:37.407 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:37.407 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:10:37.407 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:37.408 element at address: 0x200003eff180 with size: 1.000488 MiB 00:10:37.408 associated memzone info: size: 1.000366 MiB name: RG_ring_0_124395 00:10:37.408 element at address: 0x200003affc00 with size: 1.000488 MiB 00:10:37.408 associated memzone info: size: 1.000366 MiB name: RG_ring_1_124395 00:10:37.408 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:10:37.408 associated memzone info: size: 1.000366 MiB name: RG_ring_4_124395 00:10:37.408 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:10:37.408 associated memzone info: size: 1.000366 MiB name: RG_ring_5_124395 00:10:37.408 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:10:37.408 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_124395 00:10:37.408 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:10:37.408 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:37.408 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:10:37.408 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:37.408 element at address: 0x20001947c540 with size: 0.250488 MiB 00:10:37.408 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:37.408 element at address: 0x200003adf880 with size: 0.125488 MiB 00:10:37.408 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_124395 00:10:37.408 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:10:37.408 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:37.408 element at address: 0x200027e670c0 with size: 0.023743 MiB 00:10:37.408 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:37.408 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:10:37.408 associated memzone info: size: 0.015991 MiB name: RG_ring_3_124395 00:10:37.408 element at address: 0x200027e6d200 with size: 0.002441 MiB 00:10:37.408 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:37.408 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:10:37.408 associated memzone info: size: 0.000183 MiB name: MP_msgpool_124395 00:10:37.408 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:10:37.408 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_124395 00:10:37.408 element at address: 0x200027e6dcc0 with size: 0.000305 MiB 00:10:37.408 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:37.408 22:56:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:37.408 22:56:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 124395 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 124395 ']' 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 124395 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124395 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124395' 00:10:37.408 killing process with pid 124395 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 124395 00:10:37.408 22:56:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 124395 00:10:38.002 00:10:38.002 real 0m1.635s 00:10:38.002 user 0m1.685s 00:10:38.002 sys 0m0.446s 00:10:38.002 22:56:27 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.002 22:56:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:38.002 ************************************ 00:10:38.002 END TEST dpdk_mem_utility 00:10:38.002 ************************************ 00:10:38.002 22:56:27 -- common/autotest_common.sh@1142 -- # return 0 00:10:38.002 22:56:27 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:38.002 22:56:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:38.002 22:56:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.002 22:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:38.003 ************************************ 00:10:38.003 START TEST event 00:10:38.003 ************************************ 00:10:38.003 22:56:27 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:38.003 * Looking for test storage... 
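The heap, mempool, and memzone listing above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt (the filename echoed in the trace), and dpdk_mem_info.py renders that dump; the -m 0 invocation in the trace is what produced the per-heap detail. A sketch of the same flow:

  # Dump the target's DPDK memory state, then post-process it.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                   # pool/memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0              # per-heap detail, as above
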
00:10:38.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:38.003 22:56:27 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:38.003 22:56:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:38.003 22:56:27 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:38.003 22:56:27 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:38.003 22:56:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.003 22:56:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:38.003 ************************************ 00:10:38.003 START TEST event_perf 00:10:38.003 ************************************ 00:10:38.003 22:56:27 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:38.003 Running I/O for 1 seconds...[2024-07-13 22:56:27.306263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:38.003 [2024-07-13 22:56:27.307176] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124479 ] 00:10:38.261 [2024-07-13 22:56:27.474248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.261 [2024-07-13 22:56:27.561843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.261 [2024-07-13 22:56:27.561972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.261 [2024-07-13 22:56:27.562802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.261 [2024-07-13 22:56:27.562830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.637 Running I/O for 1 seconds... 00:10:39.637 lcore 0: 206763 00:10:39.637 lcore 1: 206763 00:10:39.637 lcore 2: 206762 00:10:39.637 lcore 3: 206762 00:10:39.637 done. 00:10:39.637 00:10:39.637 real 0m1.381s 00:10:39.637 user 0m4.186s 00:10:39.637 sys 0m0.092s 00:10:39.637 22:56:28 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.637 22:56:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:39.637 ************************************ 00:10:39.637 END TEST event_perf 00:10:39.637 ************************************ 00:10:39.637 22:56:28 event -- common/autotest_common.sh@1142 -- # return 0 00:10:39.637 22:56:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:39.637 22:56:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:39.637 22:56:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.637 22:56:28 event -- common/autotest_common.sh@10 -- # set +x 00:10:39.637 ************************************ 00:10:39.637 START TEST event_reactor 00:10:39.637 ************************************ 00:10:39.637 22:56:28 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:39.637 [2024-07-13 22:56:28.732588] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:39.637 [2024-07-13 22:56:28.732794] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124529 ] 00:10:39.637 [2024-07-13 22:56:28.875056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.637 [2024-07-13 22:56:28.972806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.013 test_start 00:10:41.013 oneshot 00:10:41.013 tick 100 00:10:41.013 tick 100 00:10:41.013 tick 250 00:10:41.013 tick 100 00:10:41.013 tick 100 00:10:41.013 tick 100 00:10:41.013 tick 250 00:10:41.013 tick 500 00:10:41.013 tick 100 00:10:41.013 tick 100 00:10:41.013 tick 250 00:10:41.013 tick 100 00:10:41.013 tick 100 00:10:41.013 test_end 00:10:41.013 00:10:41.013 real 0m1.356s 00:10:41.013 user 0m1.150s 00:10:41.013 sys 0m0.105s 00:10:41.013 22:56:30 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.013 ************************************ 00:10:41.013 END TEST event_reactor 00:10:41.013 ************************************ 00:10:41.013 22:56:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 22:56:30 event -- common/autotest_common.sh@1142 -- # return 0 00:10:41.013 22:56:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:41.013 22:56:30 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:41.013 22:56:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.013 22:56:30 event -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 ************************************ 00:10:41.013 START TEST event_reactor_perf 00:10:41.013 ************************************ 00:10:41.013 22:56:30 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:41.013 [2024-07-13 22:56:30.145645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:41.013 [2024-07-13 22:56:30.145897] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124572 ] 00:10:41.013 [2024-07-13 22:56:30.290488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.013 [2024-07-13 22:56:30.346902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.388 test_start 00:10:42.388 test_end 00:10:42.388 Performance: 393952 events per second 00:10:42.388 00:10:42.388 real 0m1.319s 00:10:42.388 user 0m1.142s 00:10:42.388 sys 0m0.077s 00:10:42.388 22:56:31 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.388 ************************************ 00:10:42.388 END TEST event_reactor_perf 00:10:42.388 ************************************ 00:10:42.388 22:56:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:42.388 22:56:31 event -- common/autotest_common.sh@1142 -- # return 0 00:10:42.388 22:56:31 event -- event/event.sh@49 -- # uname -s 00:10:42.388 22:56:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:42.388 22:56:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:42.388 22:56:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:42.388 22:56:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.388 22:56:31 event -- common/autotest_common.sh@10 -- # set +x 00:10:42.388 ************************************ 00:10:42.388 START TEST event_scheduler 00:10:42.388 ************************************ 00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:42.388 * Looking for test storage... 00:10:42.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:42.388 22:56:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:42.388 22:56:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=124638 00:10:42.388 22:56:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.388 22:56:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 124638 00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 124638 ']' 00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
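The harness above blocks in waitforlisten 124638 until the scheduler app's RPC socket answers. A minimal bash sketch of that polling idea, assuming rpc.py and the /var/tmp/spdk.sock path shown in the log (the function name, retry count, and use of rpc_get_methods here are illustrative, not the actual autotest_common.sh code):

    # Poll until the app with the given pid answers on its RPC socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }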
00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.388 22:56:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.388 22:56:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:42.388 [2024-07-13 22:56:31.645415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:42.389 [2024-07-13 22:56:31.645703] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124638 ] 00:10:42.646 [2024-07-13 22:56:31.816019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.647 [2024-07-13 22:56:31.899859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.647 [2024-07-13 22:56:31.900021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.647 [2024-07-13 22:56:31.900144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.647 [2024-07-13 22:56:31.900144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.581 22:56:32 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.581 22:56:32 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:10:43.581 22:56:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:43.581 22:56:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.581 22:56:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.581 POWER: Cannot set governor of lcore 0 to userspace 00:10:43.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.581 POWER: Cannot set governor of lcore 0 to performance 00:10:43.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.581 POWER: Cannot set governor of lcore 0 to userspace 00:10:43.581 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:43.581 POWER: Unable to set Power Management Environment for lcore 0 00:10:43.581 [2024-07-13 22:56:32.623235] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:43.581 [2024-07-13 22:56:32.623277] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:43.581 [2024-07-13 22:56:32.623343] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:43.581 [2024-07-13 22:56:32.623406] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:43.582 [2024-07-13 22:56:32.623446] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:43.582 [2024-07-13 22:56:32.623487] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:43.582 22:56:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:43.582 22:56:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.582 [2024-07-13 22:56:32.709292] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:43.582 22:56:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:43.582 22:56:32 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.582 22:56:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 ************************************ 00:10:43.582 START TEST scheduler_create_thread 00:10:43.582 ************************************ 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 2 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 3 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 4 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 5 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 6 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 7 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 8 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 9 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 10 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 22:56:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.582 22:56:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.515 22:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.515 00:10:44.515 real 0m1.168s 00:10:44.515 user 0m0.017s 00:10:44.515 sys 0m0.005s 00:10:44.515 22:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.515 22:56:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.515 ************************************ 00:10:44.515 END TEST scheduler_create_thread 00:10:44.515 ************************************ 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:10:44.773 22:56:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:44.773 22:56:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 124638 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 124638 ']' 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 124638 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124638 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:44.773 killing process with pid 124638 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124638' 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 124638 00:10:44.773 22:56:33 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 124638 00:10:45.033 [2024-07-13 22:56:34.371776] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
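Everything scheduler_create_thread did above went through the rpc_cmd wrapper with the scheduler plugin. Condensed to its call pattern, one pass looks roughly like this (a sketch: the loop and the $id variable are editorial, but each rpc_cmd line mirrors scheduler.sh@12-26 in the trace, and scheduler_thread_create prints the new thread id):

    # One busy (-a 100) and one idle (-a 0) thread pinned to each core mask.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # Unpinned threads exercise re-weighting and deletion by returned thread id.
    id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
    id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"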
00:10:45.291 00:10:45.291 real 0m3.144s 00:10:45.291 user 0m5.617s 00:10:45.291 sys 0m0.418s 00:10:45.291 22:56:34 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:45.291 22:56:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:45.291 ************************************ 00:10:45.291 END TEST event_scheduler 00:10:45.291 ************************************ 00:10:45.291 22:56:34 event -- common/autotest_common.sh@1142 -- # return 0 00:10:45.291 22:56:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:45.291 22:56:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:45.291 22:56:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:45.291 22:56:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.291 22:56:34 event -- common/autotest_common.sh@10 -- # set +x 00:10:45.549 ************************************ 00:10:45.549 START TEST app_repeat 00:10:45.549 ************************************ 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=124737 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 124737' 00:10:45.549 Process app_repeat pid: 124737 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:45.549 spdk_app_start Round 0 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:45.549 22:56:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 124737 /var/tmp/spdk-nbd.sock 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 124737 ']' 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.549 22:56:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:45.549 [2024-07-13 22:56:34.746133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:45.549 [2024-07-13 22:56:34.746452] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124737 ] 00:10:45.549 [2024-07-13 22:56:34.902772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.807 [2024-07-13 22:56:34.993190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.807 [2024-07-13 22:56:34.993197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.383 22:56:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.383 22:56:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:46.383 22:56:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:46.673 Malloc0 00:10:46.673 22:56:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:46.931 Malloc1 00:10:46.931 22:56:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:46.931 22:56:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:47.190 /dev/nbd0 00:10:47.190 22:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:47.190 22:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:47.190 22:56:36 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:47.190 1+0 records in 00:10:47.190 1+0 records out 00:10:47.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468881 s, 8.7 MB/s 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:47.190 22:56:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:47.190 22:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.190 22:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:47.190 22:56:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:47.449 /dev/nbd1 00:10:47.449 22:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:47.449 22:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:47.449 1+0 records in 00:10:47.449 1+0 records out 00:10:47.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348336 s, 11.8 MB/s 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:47.449 22:56:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:47.449 22:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.449 22:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:47.449 22:56:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:47.449 22:56:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.449 
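Both nbd probes above have the same shape: wait until the kernel lists the device in /proc/partitions, then prove it works with a single 4 KiB O_DIRECT read whose byte count is checked via stat. A hypothetical condensation of that waitfornbd logic (helper name, retry count, and sleep interval are assumptions; the grep, dd, and stat steps mirror autotest_common.sh@870-886 in the trace):

    waitfornbd_sketch() {
        local nbd_name=$1 i
        # The device must appear in /proc/partitions before it is usable.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O block read; a non-zero size read back means success.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }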
22:56:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.708 22:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:47.708 { 00:10:47.708 "nbd_device": "/dev/nbd0", 00:10:47.708 "bdev_name": "Malloc0" 00:10:47.708 }, 00:10:47.708 { 00:10:47.708 "nbd_device": "/dev/nbd1", 00:10:47.708 "bdev_name": "Malloc1" 00:10:47.708 } 00:10:47.708 ]' 00:10:47.708 22:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:47.708 { 00:10:47.708 "nbd_device": "/dev/nbd0", 00:10:47.708 "bdev_name": "Malloc0" 00:10:47.708 }, 00:10:47.708 { 00:10:47.708 "nbd_device": "/dev/nbd1", 00:10:47.708 "bdev_name": "Malloc1" 00:10:47.708 } 00:10:47.708 ]' 00:10:47.709 22:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:47.968 /dev/nbd1' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:47.968 /dev/nbd1' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:47.968 256+0 records in 00:10:47.968 256+0 records out 00:10:47.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102681 s, 102 MB/s 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:47.968 256+0 records in 00:10:47.968 256+0 records out 00:10:47.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221595 s, 47.3 MB/s 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:47.968 256+0 records in 00:10:47.968 256+0 records out 00:10:47.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312703 s, 33.5 MB/s 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.968 22:56:37 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.968 22:56:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.227 22:56:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:48.485 22:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:48.485 22:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:48.485 22:56:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:48.485 22:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.486 22:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.486 22:56:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:48.486 22:56:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.486 22:56:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.486 22:56:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:48.486 22:56:37 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.486 22:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:48.744 22:56:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:48.745 22:56:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:49.003 22:56:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:49.262 [2024-07-13 22:56:38.512002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.262 [2024-07-13 22:56:38.568102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.262 [2024-07-13 22:56:38.568105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.262 [2024-07-13 22:56:38.624795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:49.262 [2024-07-13 22:56:38.625016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:52.547 22:56:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:52.547 spdk_app_start Round 1 00:10:52.547 22:56:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:52.547 22:56:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 124737 /var/tmp/spdk-nbd.sock 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 124737 ']' 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
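Round 0 is torn down with spdk_kill_instance SIGTERM plus a three-second sleep, and Round 1 immediately waits on the same pid, 124737, so the app_repeat binary (started with -t 4) evidently restarts its framework rather than exiting. The outer loop from event.sh@23-25 and @34-35 reduces to roughly this (a sketch; repeat_pid and the elided verify step stand in for the full trace above):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # same pid every round
        # ... create Malloc0/Malloc1 and run the nbd data verify (see above) ...
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                              # let the framework cycle
    done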
00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.547 22:56:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:52.547 22:56:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:52.547 Malloc0 00:10:52.547 22:56:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:52.806 Malloc1 00:10:52.806 22:56:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:52.806 22:56:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:53.064 /dev/nbd0 00:10:53.064 22:56:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:53.064 22:56:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:53.065 1+0 records in 00:10:53.065 1+0 records out 
00:10:53.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417033 s, 9.8 MB/s 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:53.065 22:56:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:53.065 22:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.065 22:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.065 22:56:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:53.324 /dev/nbd1 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:53.324 1+0 records in 00:10:53.324 1+0 records out 00:10:53.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286379 s, 14.3 MB/s 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:53.324 22:56:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.324 22:56:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:53.583 { 00:10:53.583 "nbd_device": "/dev/nbd0", 00:10:53.583 "bdev_name": "Malloc0" 00:10:53.583 }, 00:10:53.583 { 00:10:53.583 "nbd_device": "/dev/nbd1", 00:10:53.583 "bdev_name": "Malloc1" 00:10:53.583 } 
00:10:53.583 ]' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:53.583 { 00:10:53.583 "nbd_device": "/dev/nbd0", 00:10:53.583 "bdev_name": "Malloc0" 00:10:53.583 }, 00:10:53.583 { 00:10:53.583 "nbd_device": "/dev/nbd1", 00:10:53.583 "bdev_name": "Malloc1" 00:10:53.583 } 00:10:53.583 ]' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:53.583 /dev/nbd1' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:53.583 /dev/nbd1' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:53.583 256+0 records in 00:10:53.583 256+0 records out 00:10:53.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00827975 s, 127 MB/s 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:53.583 256+0 records in 00:10:53.583 256+0 records out 00:10:53.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253136 s, 41.4 MB/s 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:53.583 256+0 records in 00:10:53.583 256+0 records out 00:10:53.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280662 s, 37.4 MB/s 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:53.583 22:56:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:53.583 22:56:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:53.854 22:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.130 22:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:54.387 22:56:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:54.387 22:56:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:54.953 22:56:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:54.953 [2024-07-13 22:56:44.252047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.953 [2024-07-13 22:56:44.304024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.953 [2024-07-13 22:56:44.304030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.953 [2024-07-13 22:56:44.357467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:54.953 [2024-07-13 22:56:44.357814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:58.310 spdk_app_start Round 2 00:10:58.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:58.310 22:56:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:58.310 22:56:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:58.310 22:56:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 124737 /var/tmp/spdk-nbd.sock 00:10:58.310 22:56:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 124737 ']' 00:10:58.310 22:56:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:58.310 22:56:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.310 22:56:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
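Once the Round 2 instance is listening, the same write-and-verify cycle will repeat. Extracted from the nbd_common.sh trace above, one round's data check amounts to the following (a sketch using the exact paths and dd/cmp arguments logged; the loop variable is editorial):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through each nbd
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # byte-compare readback
    done
    rm "$tmp"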
00:10:58.311 22:56:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.311 22:56:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:58.311 22:56:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.311 22:56:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:58.311 22:56:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:58.311 Malloc0 00:10:58.311 22:56:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:58.569 Malloc1 00:10:58.569 22:56:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.569 22:56:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:58.827 /dev/nbd0 00:10:58.827 22:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:58.827 22:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:58.827 22:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:58.827 1+0 records in 00:10:58.827 1+0 records out 
00:10:58.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611135 s, 6.7 MB/s 00:10:58.828 22:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:58.828 22:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:58.828 22:56:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:58.828 22:56:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:58.828 22:56:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:58.828 22:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.828 22:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.828 22:56:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:59.395 /dev/nbd1 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:59.395 1+0 records in 00:10:59.395 1+0 records out 00:10:59.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063895 s, 6.4 MB/s 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:59.395 22:56:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.395 22:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:59.654 { 00:10:59.654 "nbd_device": "/dev/nbd0", 00:10:59.654 "bdev_name": "Malloc0" 00:10:59.654 }, 00:10:59.654 { 00:10:59.654 "nbd_device": "/dev/nbd1", 00:10:59.654 "bdev_name": "Malloc1" 00:10:59.654 } 00:10:59.654 
]' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:59.654 { 00:10:59.654 "nbd_device": "/dev/nbd0", 00:10:59.654 "bdev_name": "Malloc0" 00:10:59.654 }, 00:10:59.654 { 00:10:59.654 "nbd_device": "/dev/nbd1", 00:10:59.654 "bdev_name": "Malloc1" 00:10:59.654 } 00:10:59.654 ]' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:59.654 /dev/nbd1' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:59.654 /dev/nbd1' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:59.654 22:56:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:59.654 256+0 records in 00:10:59.654 256+0 records out 00:10:59.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0082579 s, 127 MB/s 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:59.655 256+0 records in 00:10:59.655 256+0 records out 00:10:59.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272154 s, 38.5 MB/s 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:59.655 256+0 records in 00:10:59.655 256+0 records out 00:10:59.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02724 s, 38.5 MB/s 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.655 22:56:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.913 22:56:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:00.480 22:56:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:00.480 22:56:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:01.047 22:56:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:01.047 [2024-07-13 22:56:50.391959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.306 [2024-07-13 22:56:50.474867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.306 [2024-07-13 22:56:50.474873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.306 [2024-07-13 22:56:50.530577] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:01.306 [2024-07-13 22:56:50.530677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:03.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:03.831 22:56:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 124737 /var/tmp/spdk-nbd.sock 00:11:03.831 22:56:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 124737 ']' 00:11:03.831 22:56:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:03.831 22:56:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.831 22:56:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
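One detail in the zero-count check above: grep -c prints 0 but exits non-zero when nothing matches, which is why a bare true shows up in the trace right after it. A sketch of the idiom, with the socket path as traced and rpc.py invoked from an SPDK checkout:

    # Count exported NBD devices; '|| true' absorbs grep's non-zero exit
    # status on a zero count so errexit-style scripts keep running.
    rpc_server=/var/tmp/spdk-nbd.sock
    nbd_disks_name=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks \
                     | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "expected 0 attached NBD devices, found $count" >&2
        exit 1
    fi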
00:11:03.831 22:56:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.831 22:56:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:11:04.088 22:56:53 event.app_repeat -- event/event.sh@39 -- # killprocess 124737 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 124737 ']' 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 124737 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124737 00:11:04.088 killing process with pid 124737 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124737' 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@967 -- # kill 124737 00:11:04.088 22:56:53 event.app_repeat -- common/autotest_common.sh@972 -- # wait 124737 00:11:04.346 spdk_app_start is called in Round 0. 00:11:04.346 Shutdown signal received, stop current app iteration 00:11:04.346 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:11:04.346 spdk_app_start is called in Round 1. 00:11:04.346 Shutdown signal received, stop current app iteration 00:11:04.346 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:11:04.346 spdk_app_start is called in Round 2. 00:11:04.346 Shutdown signal received, stop current app iteration 00:11:04.346 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:11:04.346 spdk_app_start is called in Round 3. 
00:11:04.346 Shutdown signal received, stop current app iteration 00:11:04.346 ************************************ 00:11:04.346 END TEST app_repeat 00:11:04.346 ************************************ 00:11:04.346 22:56:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:04.346 22:56:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:04.346 00:11:04.346 real 0m19.020s 00:11:04.346 user 0m42.785s 00:11:04.346 sys 0m2.845s 00:11:04.346 22:56:53 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.346 22:56:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:04.605 22:56:53 event -- common/autotest_common.sh@1142 -- # return 0 00:11:04.605 22:56:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:04.605 22:56:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:04.605 22:56:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.605 22:56:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.605 22:56:53 event -- common/autotest_common.sh@10 -- # set +x 00:11:04.605 ************************************ 00:11:04.605 START TEST cpu_locks 00:11:04.605 ************************************ 00:11:04.605 22:56:53 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:04.605 * Looking for test storage... 00:11:04.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:04.605 22:56:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:04.605 22:56:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:04.605 22:56:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:04.605 22:56:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:04.605 22:56:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.605 22:56:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.605 22:56:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.605 ************************************ 00:11:04.605 START TEST default_locks 00:11:04.605 ************************************ 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=125331 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 125331 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 125331 ']' 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
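Note that cpu_locks.sh arms its handler before the first target comes up (trap cleanup EXIT SIGTERM SIGINT in the trace above), so lock files and leftover spdk_tgt processes are reclaimed on every exit path. A reduced sketch of the pattern; the cleanup body here is a plausible guess, not the script's actual code:

    # Arm cleanup before starting any daemon so EXIT, SIGTERM and SIGINT
    # all reap it. Sketch only, not cpu_locks.sh verbatim.
    pids=()
    cleanup() {
        for pid in "${pids[@]}"; do
            kill "$pid" 2>/dev/null || true
            wait "$pid" 2>/dev/null || true
        done
    }
    trap cleanup EXIT SIGTERM SIGINT

    ./build/bin/spdk_tgt -m 0x1 &
    pids+=($!)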
00:11:04.605 22:56:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.605 22:56:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.605 [2024-07-13 22:56:53.927302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:04.605 [2024-07-13 22:56:53.927556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125331 ] 00:11:04.864 [2024-07-13 22:56:54.072406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.864 [2024-07-13 22:56:54.167409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.798 22:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.798 22:56:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:11:05.798 22:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 125331 00:11:05.798 22:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 125331 00:11:05.798 22:56:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 125331 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 125331 ']' 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 125331 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125331 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:05.798 killing process with pid 125331 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125331' 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 125331 00:11:05.798 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 125331 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 125331 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125331 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.365 22:56:55 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 125331 00:11:06.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 125331 ']' 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.365 ERROR: process (pid: 125331) is no longer running 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:06.365 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (125331) - No such process 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:06.365 00:11:06.365 real 0m1.669s 00:11:06.365 user 0m1.722s 00:11:06.365 sys 0m0.516s 00:11:06.365 ************************************ 00:11:06.365 END TEST default_locks 00:11:06.365 ************************************ 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.365 22:56:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:06.365 22:56:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:06.365 22:56:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:06.365 22:56:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:06.365 22:56:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.365 22:56:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:06.365 ************************************ 00:11:06.365 START TEST default_locks_via_rpc 00:11:06.365 ************************************ 00:11:06.365 22:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:11:06.365 22:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=125387 00:11:06.365 22:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 125387 00:11:06.365 22:56:55 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 125387 ']' 00:11:06.365 22:56:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:06.365 22:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.366 22:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.366 22:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.366 22:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.366 22:56:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.366 [2024-07-13 22:56:55.641425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:06.366 [2024-07-13 22:56:55.641737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125387 ] 00:11:06.624 [2024-07-13 22:56:55.788588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.624 [2024-07-13 22:56:55.857222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:07.189 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 125387 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 125387 00:11:07.190 22:56:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@73 -- # killprocess 125387 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 125387 ']' 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 125387 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125387 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.448 killing process with pid 125387 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125387' 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 125387 00:11:07.448 22:56:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 125387 00:11:08.015 00:11:08.015 real 0m1.595s 00:11:08.015 user 0m1.630s 00:11:08.015 sys 0m0.485s 00:11:08.015 ************************************ 00:11:08.015 END TEST default_locks_via_rpc 00:11:08.015 ************************************ 00:11:08.015 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.015 22:56:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.015 22:56:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:08.015 22:56:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:08.015 22:56:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:08.015 22:56:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.015 22:56:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:08.015 ************************************ 00:11:08.015 START TEST non_locking_app_on_locked_coremask 00:11:08.015 ************************************ 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=125440 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 125440 /var/tmp/spdk.sock 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125440 ']' 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
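default_locks_via_rpc, which finished just above, exercises the runtime toggles rather than the launch flag: framework_disable_cpumask_locks should release the per-core lock files and framework_enable_cpumask_locks should take them back, with lslocks as the witness each time. A sketch of that round trip; the pgrep lookup is illustrative, since the test already holds the pid it started:

    # Flip CPU core locks on a live target and check that the lock files
    # follow. locks_exist mirrors the traced helper: lslocks lists a pid's
    # file locks, and SPDK's lock files carry spdk_cpu_lock in their path.
    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    pid=$(pgrep -f spdk_tgt | head -n1)      # illustrative pid lookup

    ./scripts/rpc.py framework_disable_cpumask_locks
    ! locks_exist "$pid" || { echo "locks still held after disable" >&2; exit 1; }

    ./scripts/rpc.py framework_enable_cpumask_locks
    locks_exist "$pid" || { echo "locks missing after enable" >&2; exit 1; }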
00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.015 22:56:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:08.015 [2024-07-13 22:56:57.283773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:08.015 [2024-07-13 22:56:57.284010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125440 ] 00:11:08.274 [2024-07-13 22:56:57.426993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.274 [2024-07-13 22:56:57.488557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=125461 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 125461 /var/tmp/spdk2.sock 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125461 ']' 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.209 22:56:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.209 [2024-07-13 22:56:58.355826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:09.209 [2024-07-13 22:56:58.356087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125461 ] 00:11:09.209 [2024-07-13 22:56:58.496655] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
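The second launch above is the whole point of this subtest: core 0's lock is already held by the first target (pid 125440 in this run), so a second instance can only share the core by opting out with --disable-cpumask-locks, and it needs its own RPC socket via -r /var/tmp/spdk2.sock to stay separately addressable. In sketch form:

    # Two targets sharing core 0: the first owns the core lock, the second
    # opts out of locking and gets a private RPC socket, as traced above.
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    # ...wait for /var/tmp/spdk.sock as waitforlisten does...

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    # This one logs "CPU core locks deactivated." instead of claiming core 0.

Without the flag the second instance would fail its core claim, which is exactly the failure the locking_app_on_locked_coremask subtest provokes on purpose further down.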
00:11:09.209 [2024-07-13 22:56:58.496740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.468 [2024-07-13 22:56:58.641569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.035 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.035 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:10.035 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 125440 00:11:10.035 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:10.035 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125440 00:11:10.601 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 125440 00:11:10.601 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125440 ']' 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 125440 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125440 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.602 killing process with pid 125440 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125440' 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 125440 00:11:10.602 22:56:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 125440 00:11:11.169 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 125461 00:11:11.169 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125461 ']' 00:11:11.169 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 125461 00:11:11.169 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:11.428 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.428 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125461 00:11:11.428 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:11.428 killing process with pid 125461 00:11:11.428 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:11.428 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125461' 00:11:11.428 
22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 125461 00:11:11.428 22:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 125461 00:11:11.687 00:11:11.687 real 0m3.805s 00:11:11.687 user 0m4.245s 00:11:11.687 sys 0m1.045s 00:11:11.687 ************************************ 00:11:11.687 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.687 22:57:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 END TEST non_locking_app_on_locked_coremask 00:11:11.687 ************************************ 00:11:11.687 22:57:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:11.687 22:57:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:11.687 22:57:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:11.687 22:57:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.687 22:57:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.687 ************************************ 00:11:11.687 START TEST locking_app_on_unlocked_coremask 00:11:11.687 ************************************ 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=125533 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 125533 /var/tmp/spdk.sock 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125533 ']' 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.687 22:57:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:11.946 [2024-07-13 22:57:01.149312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:11.946 [2024-07-13 22:57:01.149590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125533 ] 00:11:11.946 [2024-07-13 22:57:01.295413] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
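killprocess, traced twice during the teardown just above, is deliberately careful: it re-checks the pid with kill -0, reads the current comm name via ps so it never signals a recycled pid, special-cases a sudo-wrapped target, and finishes with wait so the child is reaped. A reduced sketch; the real helper treats the sudo case differently, while this version simply refuses:

    # Careful process teardown, mirroring the killprocess traces above.
    # Sketch only: the real helper handles sudo-wrapped targets itself.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                # still alive and signalable?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # comm of whatever holds the pid now
        [ "$name" = sudo ] && return 1            # refuse; see note above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap so no zombie remains
    }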
00:11:11.946 [2024-07-13 22:57:01.295482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.204 [2024-07-13 22:57:01.364608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125554 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125554 /var/tmp/spdk2.sock 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125554 ']' 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:12.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.771 22:57:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.771 [2024-07-13 22:57:02.142852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:12.771 [2024-07-13 22:57:02.143124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125554 ] 00:11:13.030 [2024-07-13 22:57:02.282501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.030 [2024-07-13 22:57:02.435514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.973 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.973 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:13.973 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125554 00:11:13.973 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125554 00:11:13.973 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 125533 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125533 ']' 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 125533 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125533 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:14.231 killing process with pid 125533 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125533' 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 125533 00:11:14.231 22:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 125533 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125554 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125554 ']' 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 125554 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125554 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.167 killing process with pid 125554 00:11:15.167 22:57:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125554' 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 125554 00:11:15.167 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 125554 00:11:15.426 00:11:15.426 real 0m3.661s 00:11:15.426 user 0m3.974s 00:11:15.426 sys 0m1.042s 00:11:15.426 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.426 22:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.426 ************************************ 00:11:15.426 END TEST locking_app_on_unlocked_coremask 00:11:15.426 ************************************ 00:11:15.426 22:57:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:15.426 22:57:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:15.426 22:57:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:15.426 22:57:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.426 22:57:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.426 ************************************ 00:11:15.426 START TEST locking_app_on_locked_coremask 00:11:15.426 ************************************ 00:11:15.426 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:11:15.426 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125628 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125628 /var/tmp/spdk.sock 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125628 ']' 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.427 22:57:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.686 [2024-07-13 22:57:04.861552] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:15.686 [2024-07-13 22:57:04.862077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125628 ] 00:11:15.686 [2024-07-13 22:57:05.011325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.686 [2024-07-13 22:57:05.083799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125649 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125649 /var/tmp/spdk2.sock 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125649 /var/tmp/spdk2.sock 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:16.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125649 /var/tmp/spdk2.sock 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125649 ']' 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:16.622 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.623 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:16.623 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.623 22:57:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:16.623 [2024-07-13 22:57:05.892707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:16.623 [2024-07-13 22:57:05.893204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125649 ] 00:11:16.882 [2024-07-13 22:57:06.041610] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125628 has claimed it. 00:11:16.882 [2024-07-13 22:57:06.041725] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:17.449 ERROR: process (pid: 125649) is no longer running 00:11:17.449 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (125649) - No such process 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125628 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125628 00:11:17.449 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125628 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125628 ']' 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 125628 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125628 00:11:17.708 killing process with pid 125628 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125628' 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 125628 00:11:17.708 22:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 125628 00:11:17.969 ************************************ 00:11:17.969 END TEST locking_app_on_locked_coremask 00:11:17.969 ************************************ 00:11:17.969 00:11:17.969 real 0m2.516s 00:11:17.969 user 0m2.856s 00:11:17.969 sys 0m0.681s 00:11:17.969 22:57:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.969 22:57:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:17.969 22:57:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:17.969 22:57:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:17.969 22:57:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:17.969 22:57:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.969 22:57:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:17.969 ************************************ 00:11:17.969 START TEST locking_overlapped_coremask 00:11:17.969 ************************************ 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125694 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125694 /var/tmp/spdk.sock 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 125694 ']' 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.969 22:57:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:18.227 [2024-07-13 22:57:07.428459] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:18.227 [2024-07-13 22:57:07.428991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125694 ] 00:11:18.227 [2024-07-13 22:57:07.586419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:18.486 [2024-07-13 22:57:07.669032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.486 [2024-07-13 22:57:07.669116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.486 [2024-07-13 22:57:07.669115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=125717 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 125717 /var/tmp/spdk2.sock 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125717 /var/tmp/spdk2.sock 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125717 /var/tmp/spdk2.sock 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 125717 ']' 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.053 22:57:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.311 [2024-07-13 22:57:08.460725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:19.311 [2024-07-13 22:57:08.461179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125717 ] 00:11:19.311 [2024-07-13 22:57:08.619509] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125694 has claimed it. 00:11:19.311 [2024-07-13 22:57:08.619619] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:19.879 ERROR: process (pid: 125717) is no longer running 00:11:19.879 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (125717) - No such process 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125694 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 125694 ']' 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 125694 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125694 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125694' 00:11:19.879 killing process with pid 125694 00:11:19.879 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 125694 00:11:19.879 22:57:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 125694 00:11:20.447 ************************************ 00:11:20.447 END TEST locking_overlapped_coremask 00:11:20.447 ************************************ 00:11:20.447 00:11:20.447 real 0m2.280s 00:11:20.447 user 0m6.248s 00:11:20.447 sys 0m0.516s 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 22:57:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:20.447 22:57:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:20.447 22:57:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:20.447 22:57:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.447 22:57:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 ************************************ 00:11:20.447 START TEST locking_overlapped_coremask_via_rpc 00:11:20.447 ************************************ 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=125769 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 125769 /var/tmp/spdk.sock 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125769 ']' 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.447 22:57:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 [2024-07-13 22:57:09.755568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:20.447 [2024-07-13 22:57:09.755805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125769 ] 00:11:20.706 [2024-07-13 22:57:09.905166] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:20.706 [2024-07-13 22:57:09.905241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.706 [2024-07-13 22:57:09.995528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.706 [2024-07-13 22:57:09.995671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.706 [2024-07-13 22:57:09.995672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=125795 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 125795 /var/tmp/spdk2.sock 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125795 ']' 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.642 22:57:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.642 [2024-07-13 22:57:10.816633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:21.642 [2024-07-13 22:57:10.817437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125795 ] 00:11:21.642 [2024-07-13 22:57:10.982237] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:21.642 [2024-07-13 22:57:10.982333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.901 [2024-07-13 22:57:11.174551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.902 [2024-07-13 22:57:11.185063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.902 [2024-07-13 22:57:11.185067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:22.469 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.470 [2024-07-13 22:57:11.793154] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125769 has claimed it. 00:11:22.470 request: 00:11:22.470 { 00:11:22.470 "method": "framework_enable_cpumask_locks", 00:11:22.470 "req_id": 1 00:11:22.470 } 00:11:22.470 Got JSON-RPC error response 00:11:22.470 response: 00:11:22.470 { 00:11:22.470 "code": -32603, 00:11:22.470 "message": "Failed to claim CPU core: 2" 00:11:22.470 } 00:11:22.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 125769 /var/tmp/spdk.sock 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125769 ']' 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.470 22:57:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.728 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.728 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 125795 /var/tmp/spdk2.sock 00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125795 ']' 00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:22.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.729 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.987 ************************************ 00:11:22.987 END TEST locking_overlapped_coremask_via_rpc 00:11:22.987 ************************************ 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:22.987 00:11:22.987 real 0m2.641s 00:11:22.987 user 0m1.410s 00:11:22.987 sys 0m0.181s 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.987 22:57:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:22.987 22:57:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:22.987 22:57:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125769 ]] 00:11:22.987 22:57:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125769 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 125769 ']' 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 125769 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125769 00:11:22.987 killing process with pid 125769 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125769' 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 125769 00:11:22.987 22:57:12 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 125769 00:11:23.554 22:57:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125795 ]] 00:11:23.554 22:57:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125795 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 125795 ']' 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 125795 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125795 00:11:23.554 killing process with pid 125795 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125795' 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 125795 00:11:23.554 22:57:12 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 125795 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:24.119 Process with pid 125769 is not found 00:11:24.119 Process with pid 125795 is not found 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125769 ]] 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125769 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 125769 ']' 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 125769 00:11:24.119 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125769) - No such process 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 125769 is not found' 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125795 ]] 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125795 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 125795 ']' 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 125795 00:11:24.119 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (125795) - No such process 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 125795 is not found' 00:11:24.119 22:57:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:24.119 00:11:24.119 real 0m19.550s 00:11:24.119 user 0m34.827s 00:11:24.119 sys 0m5.307s 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.119 22:57:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:24.119 ************************************ 00:11:24.119 END TEST cpu_locks 00:11:24.119 ************************************ 00:11:24.119 22:57:13 event -- common/autotest_common.sh@1142 -- # return 0 00:11:24.119 00:11:24.119 real 0m46.183s 00:11:24.119 user 1m29.939s 00:11:24.119 sys 0m9.007s 00:11:24.119 22:57:13 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.119 22:57:13 event -- common/autotest_common.sh@10 -- # set +x 00:11:24.119 ************************************ 00:11:24.119 END TEST event 00:11:24.119 ************************************ 00:11:24.119 22:57:13 -- common/autotest_common.sh@1142 -- # return 0 00:11:24.119 22:57:13 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:24.119 22:57:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:24.119 22:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.119 22:57:13 -- common/autotest_common.sh@10 -- # set +x 00:11:24.119 
************************************ 00:11:24.119 START TEST thread 00:11:24.119 ************************************ 00:11:24.119 22:57:13 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:24.119 * Looking for test storage... 00:11:24.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:24.119 22:57:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:24.119 22:57:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:24.119 22:57:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.119 22:57:13 thread -- common/autotest_common.sh@10 -- # set +x 00:11:24.119 ************************************ 00:11:24.119 START TEST thread_poller_perf 00:11:24.119 ************************************ 00:11:24.119 22:57:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:24.376 [2024-07-13 22:57:13.530581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:24.376 [2024-07-13 22:57:13.530813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125930 ] 00:11:24.376 [2024-07-13 22:57:13.671663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.376 [2024-07-13 22:57:13.757073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.376 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:25.747 ====================================== 00:11:25.747 busy:2213115254 (cyc) 00:11:25.747 total_run_count: 351000 00:11:25.747 tsc_hz: 2200000000 (cyc) 00:11:25.747 ====================================== 00:11:25.747 poller_cost: 6305 (cyc), 2865 (nsec) 00:11:25.747 00:11:25.747 real 0m1.358s 00:11:25.747 user 0m1.186s 00:11:25.747 sys 0m0.071s 00:11:25.747 22:57:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.747 22:57:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:25.747 ************************************ 00:11:25.747 END TEST thread_poller_perf 00:11:25.747 ************************************ 00:11:25.747 22:57:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:25.747 22:57:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:25.747 22:57:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:25.747 22:57:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.747 22:57:14 thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.747 ************************************ 00:11:25.747 START TEST thread_poller_perf 00:11:25.747 ************************************ 00:11:25.747 22:57:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:25.747 [2024-07-13 22:57:14.944345] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:25.747 [2024-07-13 22:57:14.944551] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125974 ] 00:11:25.747 [2024-07-13 22:57:15.083215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.747 [2024-07-13 22:57:15.139572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.747 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:27.121 ====================================== 00:11:27.121 busy:2203547210 (cyc) 00:11:27.121 total_run_count: 4520000 00:11:27.121 tsc_hz: 2200000000 (cyc) 00:11:27.121 ====================================== 00:11:27.121 poller_cost: 487 (cyc), 221 (nsec) 00:11:27.121 00:11:27.121 real 0m1.311s 00:11:27.121 user 0m1.139s 00:11:27.121 sys 0m0.073s 00:11:27.121 22:57:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.121 22:57:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:27.121 ************************************ 00:11:27.121 END TEST thread_poller_perf 00:11:27.121 ************************************ 00:11:27.121 22:57:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:27.121 22:57:16 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:27.121 22:57:16 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:27.121 22:57:16 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:27.121 22:57:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.121 22:57:16 thread -- common/autotest_common.sh@10 -- # set +x 00:11:27.121 ************************************ 00:11:27.121 START TEST thread_spdk_lock 00:11:27.121 ************************************ 00:11:27.121 22:57:16 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:27.121 [2024-07-13 22:57:16.312713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:27.121 [2024-07-13 22:57:16.313014] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126010 ] 00:11:27.121 [2024-07-13 22:57:16.464235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:27.380 [2024-07-13 22:57:16.535546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.380 [2024-07-13 22:57:16.535549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.947 [2024-07-13 22:57:17.057231] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:27.947 [2024-07-13 22:57:17.057380] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:27.947 [2024-07-13 22:57:17.057412] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x5639c25b5e80 00:11:27.947 [2024-07-13 22:57:17.058867] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:27.947 [2024-07-13 22:57:17.058977] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:27.947 [2024-07-13 22:57:17.059029] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:27.947 Starting test contend 00:11:27.947 Worker Delay Wait us Hold us Total us 00:11:27.947 0 3 145308 194582 339890 00:11:27.947 1 5 70901 298047 368949 00:11:27.947 PASS test contend 00:11:27.947 Starting test hold_by_poller 00:11:27.947 PASS test hold_by_poller 00:11:27.947 Starting test hold_by_message 00:11:27.947 PASS test hold_by_message 00:11:27.947 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:27.947 100014 assertions passed 00:11:27.947 0 assertions failed 00:11:27.947 00:11:27.947 real 0m0.870s 00:11:27.947 user 0m1.209s 00:11:27.947 sys 0m0.084s 00:11:27.947 22:57:17 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.947 22:57:17 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:27.947 ************************************ 00:11:27.947 END TEST thread_spdk_lock 00:11:27.947 ************************************ 00:11:27.947 22:57:17 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:27.947 00:11:27.947 real 0m3.785s 00:11:27.947 user 0m3.666s 00:11:27.947 sys 0m0.338s 00:11:27.947 22:57:17 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.947 22:57:17 thread -- common/autotest_common.sh@10 -- # set +x 00:11:27.947 ************************************ 00:11:27.947 END TEST thread 00:11:27.947 ************************************ 00:11:27.947 22:57:17 -- common/autotest_common.sh@1142 -- # return 0 00:11:27.947 22:57:17 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:27.947 22:57:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:11:27.947 22:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.947 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:11:27.947 ************************************ 00:11:27.947 START TEST accel 00:11:27.947 ************************************ 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:27.947 * Looking for test storage... 00:11:27.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:27.947 22:57:17 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:27.947 22:57:17 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:27.947 22:57:17 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:27.947 22:57:17 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=126090 00:11:27.947 22:57:17 accel -- accel/accel.sh@63 -- # waitforlisten 126090 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@829 -- # '[' -z 126090 ']' 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.947 22:57:17 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.947 22:57:17 accel -- common/autotest_common.sh@10 -- # set +x 00:11:27.947 22:57:17 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:27.947 22:57:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:27.947 22:57:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:27.947 22:57:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.947 22:57:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.947 22:57:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:27.947 22:57:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:27.947 22:57:17 accel -- accel/accel.sh@41 -- # jq -r . 00:11:28.207 [2024-07-13 22:57:17.395715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:28.207 [2024-07-13 22:57:17.395976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126090 ] 00:11:28.207 [2024-07-13 22:57:17.543493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.465 [2024-07-13 22:57:17.633820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@862 -- # return 0 00:11:29.032 22:57:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:29.032 22:57:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:29.032 22:57:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:29.032 22:57:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:29.032 22:57:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:29.032 22:57:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:29.032 22:57:18 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # IFS== 00:11:29.032 22:57:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:29.032 22:57:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:29.032 22:57:18 accel -- accel/accel.sh@75 -- # killprocess 126090 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@948 -- # '[' -z 126090 ']' 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@952 -- # kill -0 126090 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@953 -- # uname 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126090 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.032 killing process with pid 126090 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126090' 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@967 -- # kill 126090 00:11:29.032 22:57:18 accel -- common/autotest_common.sh@972 -- # wait 126090 00:11:29.600 22:57:18 accel -- accel/accel.sh@76 -- # trap - ERR 00:11:29.600 22:57:18 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:29.600 22:57:18 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:11:29.600 22:57:18 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:11:29.600 22:57:18 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.600 22:57:18 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:29.600 22:57:18 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.600 22:57:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:29.600 ************************************ 00:11:29.600 START TEST accel_missing_filename 00:11:29.600 ************************************ 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:29.600 22:57:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:11:29.600 22:57:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:11:29.600 [2024-07-13 22:57:19.000196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:29.600 [2024-07-13 22:57:19.000460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126163 ] 00:11:29.859 [2024-07-13 22:57:19.142431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.859 [2024-07-13 22:57:19.218638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.118 [2024-07-13 22:57:19.277329] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.118 [2024-07-13 22:57:19.360278] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:30.118 A filename is required. 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.118 00:11:30.118 real 0m0.492s 00:11:30.118 user 0m0.273s 00:11:30.118 sys 0m0.170s 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.118 22:57:19 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:11:30.118 ************************************ 00:11:30.118 END TEST accel_missing_filename 00:11:30.118 ************************************ 00:11:30.118 22:57:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:30.118 22:57:19 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:30.118 22:57:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:30.118 22:57:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.118 22:57:19 accel -- common/autotest_common.sh@10 -- # set +x 00:11:30.118 ************************************ 00:11:30.118 START TEST accel_compress_verify 00:11:30.118 ************************************ 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.118 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:30.118 22:57:19 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:30.118 22:57:19 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:11:30.378 [2024-07-13 22:57:19.538702] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:30.378 [2024-07-13 22:57:19.538919] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126187 ] 00:11:30.378 [2024-07-13 22:57:19.675910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.378 [2024-07-13 22:57:19.737137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.637 [2024-07-13 22:57:19.798527] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.637 [2024-07-13 22:57:19.883944] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:30.637 00:11:30.637 Compression does not support the verify option, aborting. 
00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.637 00:11:30.637 real 0m0.480s 00:11:30.637 user 0m0.290s 00:11:30.637 sys 0m0.141s 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.637 22:57:19 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:11:30.637 ************************************ 00:11:30.637 END TEST accel_compress_verify 00:11:30.637 ************************************ 00:11:30.637 22:57:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:30.637 22:57:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:30.637 22:57:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:30.637 22:57:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.637 22:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:11:30.637 ************************************ 00:11:30.637 START TEST accel_wrong_workload 00:11:30.637 ************************************ 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.637 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:11:30.637 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:30.637 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:11:30.896 22:57:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:11:30.896 Unsupported workload type: foobar 00:11:30.896 [2024-07-13 22:57:20.066506] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:30.896 accel_perf options: 00:11:30.896 [-h help message] 00:11:30.896 [-q queue depth per core] 00:11:30.896 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:30.896 [-T number of threads per core 00:11:30.896 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:30.896 [-t time in seconds] 00:11:30.896 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:30.896 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:30.896 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:30.896 [-l for compress/decompress workloads, name of uncompressed input file 00:11:30.896 [-S for crc32c workload, use this seed value (default 0) 00:11:30.896 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:30.896 [-f for fill workload, use this BYTE value (default 255) 00:11:30.896 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:30.896 [-y verify result if this switch is on] 00:11:30.896 [-a tasks to allocate per core (default: same value as -q)] 00:11:30.896 Can be used to spread operations across a wider range of memory. 00:11:30.896 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:11:30.896 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.896 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.896 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.896 00:11:30.896 real 0m0.051s 00:11:30.896 user 0m0.034s 00:11:30.896 sys 0m0.017s 00:11:30.896 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.896 22:57:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:11:30.896 ************************************ 00:11:30.896 END TEST accel_wrong_workload 00:11:30.896 ************************************ 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:30.896 22:57:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:11:30.896 ************************************ 00:11:30.896 START TEST accel_negative_buffers 00:11:30.896 ************************************ 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.896 22:57:20 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:11:30.896 22:57:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:11:30.896 -x option must be non-negative. 00:11:30.896 [2024-07-13 22:57:20.164018] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:30.896 accel_perf options: 00:11:30.896 [-h help message] 00:11:30.896 [-q queue depth per core] 00:11:30.896 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:30.896 [-T number of threads per core 00:11:30.896 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:30.896 [-t time in seconds] 00:11:30.896 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:30.896 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:30.896 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:30.896 [-l for compress/decompress workloads, name of uncompressed input file 00:11:30.896 [-S for crc32c workload, use this seed value (default 0) 00:11:30.896 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:30.896 [-f for fill workload, use this BYTE value (default 255) 00:11:30.896 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:30.896 [-y verify result if this switch is on] 00:11:30.896 [-a tasks to allocate per core (default: same value as -q)] 00:11:30.896 Can be used to spread operations across a wider range of memory. 
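The option dump above is printed because -x -1 violates the documented minimum of two source buffers for the xor workload. For contrast, a well-formed invocation built only from options shown in that list (same binary path as the trace; a sketch, not output captured from this run):

  # xor with the minimum two source buffers, verification on, for one second:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2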
00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.896 00:11:30.896 real 0m0.050s 00:11:30.896 user 0m0.034s 00:11:30.896 sys 0m0.016s 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.896 22:57:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:11:30.896 ************************************ 00:11:30.896 END TEST accel_negative_buffers 00:11:30.896 ************************************ 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:30.896 22:57:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.896 22:57:20 accel -- common/autotest_common.sh@10 -- # set +x 00:11:30.896 ************************************ 00:11:30.896 START TEST accel_crc32c 00:11:30.896 ************************************ 00:11:30.896 22:57:20 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:30.896 22:57:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:30.897 22:57:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.897 22:57:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.897 22:57:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:30.897 22:57:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:30.897 22:57:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:30.897 [2024-07-13 22:57:20.262477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:30.897 [2024-07-13 22:57:20.262749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126272 ] 00:11:31.155 [2024-07-13 22:57:20.408798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.155 [2024-07-13 22:57:20.486838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.155 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:11:31.414 22:57:20 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:31.414 22:57:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:32.349 22:57:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:32.349 00:11:32.349 real 0m1.503s 00:11:32.349 user 0m1.269s 00:11:32.349 sys 0m0.174s 00:11:32.349 22:57:21 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.349 22:57:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:32.349 ************************************ 00:11:32.349 END TEST accel_crc32c 00:11:32.349 ************************************ 00:11:32.609 22:57:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:32.609 22:57:21 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:32.609 22:57:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:32.609 22:57:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.609 22:57:21 accel -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 ************************************ 00:11:32.609 START TEST accel_crc32c_C2 00:11:32.609 ************************************ 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:32.609 22:57:21 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:32.609 [2024-07-13 22:57:21.823855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:32.609 [2024-07-13 22:57:21.824161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126314 ] 00:11:32.609 [2024-07-13 22:57:21.967999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.868 [2024-07-13 22:57:22.062214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:32.868 22:57:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:34.245 00:11:34.245 real 0m1.518s 00:11:34.245 user 0m1.303s 00:11:34.245 sys 0m0.148s 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.245 22:57:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:11:34.245 ************************************ 00:11:34.245 END TEST accel_crc32c_C2 00:11:34.245 ************************************ 00:11:34.245 22:57:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:34.245 22:57:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:11:34.245 22:57:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:34.245 22:57:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.245 22:57:23 accel -- common/autotest_common.sh@10 -- # set +x 00:11:34.245 ************************************ 00:11:34.245 START TEST accel_copy 00:11:34.245 ************************************ 00:11:34.245 22:57:23 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:34.245 22:57:23 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:34.245 22:57:23 
accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:11:34.245 [2024-07-13 22:57:23.391282] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:34.245 [2024-07-13 22:57:23.391521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126365 ] 00:11:34.245 [2024-07-13 22:57:23.532928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.245 [2024-07-13 22:57:23.626437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:34.504 22:57:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.878 22:57:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:35.878 22:57:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:35.878 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:35.878 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.878 22:57:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:35.878 22:57:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:11:35.879 22:57:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:35.879 00:11:35.879 real 0m1.516s 00:11:35.879 user 0m1.294s 00:11:35.879 sys 0m0.156s 00:11:35.879 22:57:24 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.879 ************************************ 00:11:35.879 END TEST accel_copy 00:11:35.879 ************************************ 00:11:35.879 22:57:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:11:35.879 22:57:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:35.879 22:57:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:35.879 22:57:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:35.879 22:57:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.879 22:57:24 accel -- common/autotest_common.sh@10 -- # set +x 00:11:35.879 ************************************ 00:11:35.879 START TEST accel_fill 00:11:35.879 ************************************ 00:11:35.879 22:57:24 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:11:35.879 22:57:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:11:35.879 [2024-07-13 22:57:24.952685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:35.879 [2024-07-13 22:57:24.952972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126410 ] 00:11:35.879 [2024-07-13 22:57:25.089590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.879 [2024-07-13 22:57:25.157603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.879 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:35.880 22:57:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.395 22:57:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 
22:57:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:11:37.396 22:57:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:37.396 00:11:37.396 real 0m1.503s 00:11:37.396 user 0m1.281s 00:11:37.396 sys 0m0.161s 00:11:37.396 ************************************ 00:11:37.396 END TEST accel_fill 00:11:37.396 ************************************ 00:11:37.396 22:57:26 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.396 22:57:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:11:37.396 22:57:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:37.396 22:57:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:37.396 22:57:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:37.396 22:57:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.396 22:57:26 accel -- common/autotest_common.sh@10 -- # set +x 00:11:37.396 ************************************ 00:11:37.396 START TEST accel_copy_crc32c 00:11:37.396 ************************************ 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:37.396 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:37.396 [2024-07-13 22:57:26.506628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:37.396 [2024-07-13 22:57:26.506885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126449 ] 00:11:37.396 [2024-07-13 22:57:26.652547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.396 [2024-07-13 22:57:26.745002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.653 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 
22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:37.654 22:57:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.026 00:11:39.026 real 0m1.537s 00:11:39.026 user 0m1.296s 00:11:39.026 sys 0m0.166s 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.026 ************************************ 00:11:39.026 END TEST accel_copy_crc32c 00:11:39.026 ************************************ 00:11:39.026 22:57:28 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:39.026 22:57:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:39.026 22:57:28 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:39.026 22:57:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:39.026 22:57:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.026 22:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:11:39.026 ************************************ 00:11:39.026 START TEST accel_copy_crc32c_C2 00:11:39.026 ************************************ 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:39.026 [2024-07-13 22:57:28.093968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:39.026 [2024-07-13 22:57:28.094298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126500 ] 00:11:39.026 [2024-07-13 22:57:28.240071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.026 [2024-07-13 22:57:28.315795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.026 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
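[editor's note] The trace above is accel.sh's harness at work: the command line shows accel_perf being handed its JSON config over file descriptor 62 (-c /dev/fd/62), and the repeated "IFS=:" / "read -r var val" / "case "$var" in" lines are the script walking a colon-separated key:value stream (accel_perf's printed settings summary) to record the opcode and module it expects. A minimal sketch of that parsing pattern, with illustrative keys and input rather than accel.sh's real ones:

    # Sketch of the key:value reader behind the IFS=:/read/case trace above.
    # Keys and the here-doc input are illustrative, not accel.sh's own config.
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # operation under test
            module) accel_module=$val ;;  # engine expected to service it
        esac
    done <<'EOF'
    opc:copy_crc32c
    module:software
    EOF
    echo "expecting $accel_opc on the $accel_module engine"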
00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:39.027 22:57:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:40.400 22:57:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:40.400 00:11:40.400 real 0m1.513s 00:11:40.401 user 0m1.306s 00:11:40.401 sys 0m0.142s 00:11:40.401 22:57:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.401 22:57:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:11:40.401 ************************************ 00:11:40.401 END TEST accel_copy_crc32c_C2 00:11:40.401 
************************************ 00:11:40.401 22:57:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:40.401 22:57:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:40.401 22:57:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:40.401 22:57:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.401 22:57:29 accel -- common/autotest_common.sh@10 -- # set +x 00:11:40.401 ************************************ 00:11:40.401 START TEST accel_dualcast 00:11:40.401 ************************************ 00:11:40.401 22:57:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:11:40.401 22:57:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:11:40.401 [2024-07-13 22:57:29.660301] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
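[editor's note] The dualcast workload starting here copies a single source buffer into two destination buffers in one operation. A shell-level analogy for that data movement (files standing in for the DMA buffers accel_perf really uses, not the SPDK path itself):

    # dualcast shape: one source, two destinations written in one pass.
    printf 'payload' > src.bin
    tee dst1.bin > dst2.bin < src.bin
    cmp -s src.bin dst1.bin && cmp -s src.bin dst2.bin && echo 'dualcast ok'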
00:11:40.401 [2024-07-13 22:57:29.660562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126539 ] 00:11:40.401 [2024-07-13 22:57:29.799995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.659 [2024-07-13 22:57:29.905424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.659 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.660 22:57:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:11:42.035 22:57:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:42.035 00:11:42.035 real 0m1.542s 00:11:42.035 user 0m1.315s 00:11:42.035 sys 0m0.156s 00:11:42.035 22:57:31 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.035 22:57:31 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:11:42.035 ************************************ 00:11:42.035 END TEST accel_dualcast 00:11:42.035 ************************************ 00:11:42.035 22:57:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:42.035 22:57:31 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:42.035 22:57:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:42.035 22:57:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.035 22:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:11:42.035 ************************************ 00:11:42.035 START TEST accel_compare 00:11:42.035 ************************************ 00:11:42.035 22:57:31 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:11:42.035 22:57:31 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:11:42.035 [2024-07-13 22:57:31.253950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
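[editor's note] The compare workload launching here is a memcmp-style equality check over two same-sized buffers (with -y asking accel_perf to verify results). The semantics at shell level, again with files standing in for buffers:

    # compare semantics: byte-for-byte equality of two 4096-byte buffers.
    head -c 4096 /dev/zero > a.bin
    cp a.bin b.bin
    if cmp -s a.bin b.bin; then echo 'buffers match'; else echo 'mismatch'; fi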
00:11:42.035 [2024-07-13 22:57:31.254216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126592 ] 00:11:42.035 [2024-07-13 22:57:31.400432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.294 [2024-07-13 22:57:31.481034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.294 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.295 22:57:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:11:43.672 22:57:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:43.672 00:11:43.672 real 0m1.509s 00:11:43.672 user 0m1.291s 00:11:43.672 sys 0m0.152s 00:11:43.672 22:57:32 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.672 ************************************ 00:11:43.672 END TEST accel_compare 00:11:43.672 ************************************ 00:11:43.672 22:57:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:11:43.672 22:57:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:43.672 22:57:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:43.672 22:57:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:43.672 22:57:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.672 22:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:11:43.672 ************************************ 00:11:43.672 START TEST accel_xor 00:11:43.672 ************************************ 00:11:43.672 22:57:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:43.672 22:57:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:43.673 22:57:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:43.673 [2024-07-13 22:57:32.814554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
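[editor's note] This first xor run uses accel_perf's default of two source buffers (visible as val=2 in the trace that follows); the next test raises that to three with -x 3. The destination is the bytewise XOR of the sources, shown here on a single byte with shell arithmetic:

    # Bytewise xor semantics on one byte: dst = s0 ^ s1.
    s0=0xF0; s1=0x3C
    printf 'dst = 0x%02X\n' $(( s0 ^ s1 ))    # prints dst = 0xCC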
00:11:43.673 [2024-07-13 22:57:32.814801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126631 ] 00:11:43.673 [2024-07-13 22:57:32.960920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.673 [2024-07-13 22:57:33.064500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
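[editor's note] The accel_module=software capture above feeds the end-of-test checks seen at accel.sh@27 in each completed test ([[ -n software ]], [[ software == software ]]): a run only passes if an opcode and a module were recorded and the module is the expected one. The same assertions in sketch form, with variable names mirroring the trace but the logic reconstructed rather than quoted:

    # End-of-test assertions in the spirit of accel.sh's @27 checks.
    accel_opc=xor
    accel_module=software
    [[ -n $accel_opc && -n $accel_module ]] || exit 1
    [[ $accel_module == software ]] && echo 'serviced by the software engine'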
00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.938 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.939 22:57:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.939 22:57:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.939 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.939 22:57:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.317 22:57:34 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:45.317 00:11:45.317 real 0m1.550s 00:11:45.317 user 0m1.334s 00:11:45.317 sys 0m0.144s 00:11:45.317 22:57:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.317 22:57:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:45.317 ************************************ 00:11:45.317 END TEST accel_xor 00:11:45.317 ************************************ 00:11:45.317 22:57:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:45.317 22:57:34 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:45.317 22:57:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:45.317 22:57:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.317 22:57:34 accel -- common/autotest_common.sh@10 -- # set +x 00:11:45.317 ************************************ 00:11:45.317 START TEST accel_xor 00:11:45.317 ************************************ 00:11:45.317 22:57:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:45.317 22:57:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:45.317 [2024-07-13 22:57:34.413873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
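[editor's note] The second xor variant starting here passes -x 3, folding three source buffers into one destination: dst = s0 ^ s1 ^ s2. Extending the single-byte arithmetic from the previous sketch:

    # Three-source xor, as requested by -x 3: dst = s0 ^ s1 ^ s2.
    s0=0xF0; s1=0x3C; s2=0x55
    printf 'dst = 0x%02X\n' $(( s0 ^ s1 ^ s2 ))    # prints dst = 0x99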
00:11:45.317 [2024-07-13 22:57:34.414113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126683 ] 00:11:45.317 [2024-07-13 22:57:34.551306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.317 [2024-07-13 22:57:34.658776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.576 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
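[editor's note] The recurring 32 / '4096 bytes' / '1 seconds' values read back in these traces are accel_perf's printed settings (queue depth, buffer size, run time). A standalone rerun of this test outside the harness would look roughly as below; the -q (queue depth) and -o (transfer size) flag spellings are assumed from accel_perf's usage text, and the binary path is taken from the trace:

    # Hypothetical standalone equivalent of this test's parameters.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -q 32 -o 4096 -t 1 -w xor -y -x 3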
00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.577 22:57:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:46.513 22:57:35 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:46.513 22:57:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:46.513 00:11:46.513 real 0m1.523s 00:11:46.513 user 0m1.312s 00:11:46.513 sys 0m0.153s 00:11:46.513 22:57:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.513 22:57:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:46.513 ************************************ 00:11:46.513 END TEST accel_xor 00:11:46.513 ************************************ 00:11:46.772 22:57:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:46.772 22:57:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:46.772 22:57:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:46.772 22:57:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.772 22:57:35 accel -- common/autotest_common.sh@10 -- # set +x 00:11:46.772 ************************************ 00:11:46.772 START TEST accel_dif_verify 00:11:46.772 ************************************ 00:11:46.772 22:57:35 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:46.772 22:57:35 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:11:46.772 [2024-07-13 22:57:35.991051] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
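[editor's note] dif_verify, just starting here, checks T10 DIF protection information; the trace that follows shows 4096-byte buffers with a 512-byte block size and 8 bytes of DIF per block (2-byte guard CRC, 2-byte application tag, 4-byte reference tag). The bookkeeping those sizes imply:

    # DIF layout arithmetic for the sizes in this test's trace.
    buf=4096 blk=512 dif=8
    echo "blocks per buffer:    $(( buf / blk ))"          # 8
    echo "DIF bytes per buffer: $(( buf / blk * dif ))"    # 64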
00:11:46.772 [2024-07-13 22:57:35.991286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126720 ] 00:11:46.772 [2024-07-13 22:57:36.138191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.031 [2024-07-13 22:57:36.235048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.031 22:57:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.409 22:57:37 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:11:48.409 22:57:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:48.409 00:11:48.409 real 0m1.533s 00:11:48.409 user 0m1.304s 00:11:48.409 sys 0m0.151s 00:11:48.409 22:57:37 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.409 22:57:37 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:11:48.409 ************************************ 00:11:48.409 END TEST accel_dif_verify 00:11:48.409 ************************************ 00:11:48.409 22:57:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:48.409 22:57:37 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:48.409 22:57:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:48.409 22:57:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.409 22:57:37 accel -- common/autotest_common.sh@10 -- # set +x 00:11:48.409 ************************************ 00:11:48.409 START TEST accel_dif_generate 00:11:48.409 ************************************ 00:11:48.409 22:57:37 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.409 22:57:37 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:11:48.409 22:57:37 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:11:48.409 [2024-07-13 22:57:37.575458] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:48.409 [2024-07-13 22:57:37.575716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126766 ] 00:11:48.409 [2024-07-13 22:57:37.721409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.669 [2024-07-13 22:57:37.815751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.669 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:11:48.670 22:57:37 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:48.670 22:57:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:11:50.049 22:57:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:50.049 00:11:50.049 real 0m1.538s 
00:11:50.049 user 0m1.333s 00:11:50.049 sys 0m0.142s 00:11:50.049 22:57:39 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.049 22:57:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:11:50.049 ************************************ 00:11:50.049 END TEST accel_dif_generate 00:11:50.049 ************************************ 00:11:50.049 22:57:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:50.049 22:57:39 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:50.050 22:57:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:50.050 22:57:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.050 22:57:39 accel -- common/autotest_common.sh@10 -- # set +x 00:11:50.050 ************************************ 00:11:50.050 START TEST accel_dif_generate_copy 00:11:50.050 ************************************ 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:50.050 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:11:50.050 [2024-07-13 22:57:39.168425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
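Every suite here is launched through the run_test wrapper from autotest_common.sh, which is what emits the START TEST/END TEST banners and the real/user/sys triple after each pass. In outline it behaves like the sketch below (the actual function also toggles xtrace and propagates failures):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # e.g. accel_test -t 1 -w dif_generate_copy
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }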
00:11:50.050 [2024-07-13 22:57:39.168707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126810 ] 00:11:50.050 [2024-07-13 22:57:39.321446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.050 [2024-07-13 22:57:39.430761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.309 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:50.310 22:57:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
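The repeating IFS=: / read -r var val / case "$var" lines that fill each section come from the loop around accel.sh lines 19-23, which parses accel_perf's "key: value" output to record which opcode and module actually executed; the [[ -n software ]] / [[ -n dif_generate_copy ]] checks at accel.sh@27 then assert on what was captured. Roughly (a sketch reconstructed from the trace, not the literal script text; accel_perf_cmd is a hypothetical stand-in for the harness's invocation):

  while IFS=: read -r var val; do
    case "$var" in
      *opc*)    accel_opc=${val# }    ;;  # e.g. dif_generate_copy
      *module*) accel_module=${val# } ;;  # e.g. software
    esac
  done < <("$accel_perf_cmd")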
00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:51.687 00:11:51.687 real 0m1.572s 00:11:51.687 user 0m1.370s 00:11:51.687 sys 0m0.144s 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.687 22:57:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:11:51.687 ************************************ 00:11:51.687 END TEST accel_dif_generate_copy 00:11:51.687 ************************************ 00:11:51.687 22:57:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:51.687 22:57:40 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:11:51.687 22:57:40 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.687 22:57:40 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:51.687 22:57:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.687 22:57:40 accel -- common/autotest_common.sh@10 -- # set +x 00:11:51.687 ************************************ 00:11:51.687 START TEST accel_comp 00:11:51.687 ************************************ 00:11:51.687 22:57:40 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:11:51.687 22:57:40 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:11:51.687 22:57:40 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:11:51.687 [2024-07-13 22:57:40.796601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:51.687 [2024-07-13 22:57:40.796880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126856 ] 00:11:51.687 [2024-07-13 22:57:40.946088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.687 [2024-07-13 22:57:41.059091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:51.945 22:57:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:11:53.318 22:57:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:53.318 00:11:53.318 real 0m1.576s 00:11:53.318 user 0m1.339s 00:11:53.318 sys 0m0.169s 00:11:53.318 22:57:42 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.318 22:57:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:11:53.318 ************************************ 00:11:53.318 END TEST accel_comp 00:11:53.318 ************************************ 00:11:53.318 22:57:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:53.318 22:57:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:53.318 22:57:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:53.318 22:57:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.318 22:57:42 accel -- common/autotest_common.sh@10 -- # set +x 00:11:53.318 ************************************ 00:11:53.318 START TEST accel_decomp 00:11:53.318 ************************************ 00:11:53.318 22:57:42 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:53.318 22:57:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:11:53.318 22:57:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:11:53.318 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:11:53.319 22:57:42 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:11:53.319 [2024-07-13 22:57:42.419960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:53.319 [2024-07-13 22:57:42.420234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126909 ] 00:11:53.319 [2024-07-13 22:57:42.568253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.319 [2024-07-13 22:57:42.648552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.577 22:57:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
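Unlike the dif_* workloads, the compress/decompress passes operate on a real input file: -l points accel_perf at the test/accel/bib fixture, and -y, which appears only on the decompress runs here, presumably enables verification of the output data. Reproducing the accel_decomp pass by hand, under the same repo-layout assumption as before:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y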
00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:53.578 22:57:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:54.511 22:57:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:54.511 00:11:54.511 real 0m1.514s 00:11:54.511 user 0m1.279s 00:11:54.511 sys 0m0.170s 00:11:54.511 22:57:43 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.511 22:57:43 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:11:54.511 ************************************ 00:11:54.511 END TEST accel_decomp 00:11:54.511 ************************************ 00:11:54.770 22:57:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:54.770 22:57:43 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:54.770 22:57:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:54.770 22:57:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.770 22:57:43 accel -- common/autotest_common.sh@10 -- # set +x 00:11:54.770 ************************************ 00:11:54.770 START TEST accel_decomp_full 00:11:54.770 ************************************ 00:11:54.770 22:57:43 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:11:54.770 22:57:43 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:11:54.770 [2024-07-13 22:57:43.985238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
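accel_decomp_full repeats the accel_decomp command with one extra flag, -o 0. Judging from the config echoed in the trace below, that swaps the default '4096 bytes' transfer size for '111250 bytes' (the full size of the bib fixture), so the file is decompressed as a single buffer rather than in 4 KiB chunks:

  # as invoked by the harness (paths shortened to the repo root used by this job):
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y       # accel_decomp: '4096 bytes'
  ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0  # accel_decomp_full: '111250 bytes'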
00:11:54.770 [2024-07-13 22:57:43.985487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126948 ]
00:11:54.770 [2024-07-13 22:57:44.133193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:55.028 [2024-07-13 22:57:44.207646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:55.028 22:57:44 accel.accel_decomp_full -- [accel.sh xtrace trimmed: option loop sets val=0x1, decompress, '111250 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:11:56.404 22:57:45 accel.accel_decomp_full -- [accel.sh xtrace trimmed: repeated "val=" completion loop]
00:11:56.404 22:57:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:56.404 22:57:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:11:56.404 22:57:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:56.404 real 0m1.522s
00:11:56.404 user 0m1.310s
00:11:56.404 sys 0m0.157s
00:11:56.404 22:57:45 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:56.404 22:57:45 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:11:56.404 ************************************
00:11:56.404 END TEST accel_decomp_full
00:11:56.404 ************************************
00:11:56.404 22:57:45 accel -- common/autotest_common.sh@1142 -- # return 0
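The _full variant differs from plain accel_decomp only by -o 0; the '111250 bytes' value in its trace (versus '4096 bytes' in the chunked suites) suggests -o selects the per-operation buffer size, with 0 meaning the whole decompressed file. Treat that reading as inferred from this log, not as documented behavior:
# chunked vs whole-buffer decompress (sketch; -o semantics inferred from the trace)
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y        # 4096-byte operations
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0   # one 111250-byte operation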
00:11:56.404 22:57:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:11:56.404 ************************************
00:11:56.404 START TEST accel_decomp_mcore
00:11:56.404 ************************************
00:11:56.404 22:57:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:11:56.404 22:57:45 accel.accel_decomp_mcore -- [build_accel_config xtrace trimmed]
00:11:56.404 [2024-07-13 22:57:45.555894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:11:56.404 [2024-07-13 22:57:45.556350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126999 ]
00:11:56.404 [2024-07-13 22:57:45.721511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:56.663 [2024-07-13 22:57:45.821036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:56.663 [2024-07-13 22:57:45.821182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:56.663 [2024-07-13 22:57:45.821495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:56.663 [2024-07-13 22:57:45.821541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:56.663 22:57:45 accel.accel_decomp_mcore -- [accel.sh xtrace trimmed: option loop sets val=0xf, decompress, '4096 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:11:58.040 22:57:47 accel.accel_decomp_mcore -- [accel.sh xtrace trimmed: repeated "val=" completion loop]
00:11:58.040 22:57:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:58.040 22:57:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:11:58.040 22:57:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:58.040 real 0m1.573s
00:11:58.040 user 0m4.840s
00:11:58.040 sys 0m0.166s
00:11:58.040 22:57:47 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:58.040 22:57:47 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:11:58.040 ************************************
00:11:58.040 END TEST accel_decomp_mcore
00:11:58.040 ************************************
00:11:58.040 22:57:47 accel -- common/autotest_common.sh@1142 -- # return 0
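-m 0xf is the usual SPDK core-mask notation: bits 0-3 select cores 0-3, matching the four 'Reactor started on core N' notices above. It is also why user time (0m4.840s) exceeds wall-clock time (0m1.573s): CPU time is summed across the four reactors. Hypothetical mask variations for comparison runs:
# core-mask sketches for accel_perf -m (one bit per core)
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0x1   # core 0 only
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0x5   # cores 0 and 2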
00:11:58.040 22:57:47 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:58.040 ************************************
00:11:58.040 START TEST accel_decomp_full_mcore
00:11:58.040 ************************************
00:11:58.040 22:57:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:58.041 22:57:47 accel.accel_decomp_full_mcore -- [build_accel_config xtrace trimmed]
00:11:58.300 [2024-07-13 22:57:47.180504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:11:58.300 [2024-07-13 22:57:47.180789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127041 ]
00:11:58.300 [2024-07-13 22:57:47.346065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:58.300 [2024-07-13 22:57:47.462515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:58.300 [2024-07-13 22:57:47.462645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:58.300 [2024-07-13 22:57:47.463130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:58.300 [2024-07-13 22:57:47.463157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:58.300 22:57:47 accel.accel_decomp_full_mcore -- [accel.sh xtrace trimmed: option loop sets val=0xf, decompress, '111250 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
00:11:59.676 22:57:48 accel.accel_decomp_full_mcore -- [accel.sh xtrace trimmed: repeated "val=" completion loop]
00:11:59.676 22:57:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:11:59.676 22:57:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:11:59.676 22:57:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:59.676 real 0m1.611s
00:11:59.676 user 0m4.943s
00:11:59.676 sys 0m0.148s
00:11:59.676 22:57:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:59.676 22:57:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:11:59.676 ************************************
00:11:59.676 END TEST accel_decomp_full_mcore
00:11:59.676 ************************************
00:11:59.676 22:57:48 accel -- common/autotest_common.sh@1142 -- # return 0
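accel_decomp_full_mcore simply combines the two previous variations: whole-buffer operations (-o 0) spread across the 0xf core mask. If such a mask needs to be built for the first N cores programmatically, a tiny helper sketch (not part of the harness):
# build a core mask covering cores 0..n-1
n=4; printf 'mask=0x%x\n' $(( (1 << n) - 1 ))   # -> mask=0xf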
00:11:59.676 22:57:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:59.676 ************************************
00:11:59.676 START TEST accel_decomp_mthread
00:11:59.676 ************************************
00:11:59.676 22:57:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:59.676 22:57:48 accel.accel_decomp_mthread -- [build_accel_config xtrace trimmed]
00:11:59.676 [2024-07-13 22:57:48.839121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:11:59.676 [2024-07-13 22:57:48.839400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127095 ]
00:11:59.676 [2024-07-13 22:57:48.987503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:59.934 [2024-07-13 22:57:49.103912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:59.934 22:57:49 accel.accel_decomp_mthread -- [accel.sh xtrace trimmed: option loop sets val=0x1, decompress, '4096 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes]
00:12:01.312 22:57:50 accel.accel_decomp_mthread -- [accel.sh xtrace trimmed: repeated "val=" completion loop]
00:12:01.312 22:57:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:12:01.312 22:57:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:12:01.312 22:57:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:12:01.312 real 0m1.563s
00:12:01.312 user 0m1.319s
00:12:01.312 sys 0m0.151s
00:12:01.312 22:57:50 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:01.312 ************************************
00:12:01.312 END TEST accel_decomp_mthread
00:12:01.312 ************************************
00:12:01.312 22:57:50 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:12:01.312 22:57:50 accel -- common/autotest_common.sh@1142 -- # return 0
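The mthread variant stays on a single reactor (EAL mask -c 0x1) but adds -T 2; the 'val=2' in the trace suggests two parallel worker threads/channels on that core. That reading is inferred from context, so verify against accel_perf's usage output before relying on it:
# single core, two parallel decompress streams (sketch; -T meaning inferred)
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2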
00:12:01.312 22:57:50 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:12:01.312 ************************************
00:12:01.312 START TEST accel_decomp_full_mthread
00:12:01.312 ************************************
00:12:01.312 22:57:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:12:01.312 22:57:50 accel.accel_decomp_full_mthread -- [build_accel_config xtrace trimmed]
00:12:01.312 [2024-07-13 22:57:50.451876] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:12:01.312 [2024-07-13 22:57:50.452123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127134 ]
00:12:01.312 [2024-07-13 22:57:50.596596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:01.312 [2024-07-13 22:57:50.669183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:01.571 22:57:50 accel.accel_decomp_full_mthread -- [accel.sh xtrace trimmed: option loop sets val=0x1, decompress, '111250 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes]
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:01.572 22:57:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:01.572 22:57:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:02.983 00:12:02.983 real 0m1.543s 00:12:02.983 user 0m1.305s 00:12:02.983 sys 0m0.159s 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.983 22:57:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:02.983 ************************************ 00:12:02.983 END TEST accel_decomp_full_mthread 00:12:02.983 ************************************ 
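Note on the trace above: the block of val=/case "$var" lines is accel.sh's option loop — each IFS=: read -r var val pulls one key:value pair (opcode decompress, the software module, the 111250-byte payload, plus the remaining numeric options 32, 32, 2 and '1 seconds') and the case statement dispatches it. A minimal bash sketch of that pattern, with illustrative key names rather than the exact accel.sh source:

    # Hedged reduction of the xtrace loop above; the key names and the
    # $accel_conf herestring are illustrative assumptions, not SPDK code.
    while IFS=: read -r var val; do          # e.g. "opc:decompress"
        case "$var" in
            opc)    accel_opc=$val ;;        # decompress
            module) accel_module=$val ;;     # software
            *)      run_opts+=("$var" "$val") ;;  # 32, 32, 2, '1 seconds', ...
        esac
    done <<< "$accel_conf"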
00:12:02.983 22:57:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:02.983 22:57:51 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:12:02.983 22:57:51 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:02.983 22:57:51 accel -- accel/accel.sh@137 -- # build_accel_config 00:12:02.983 22:57:51 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:02.983 22:57:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.983 22:57:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.983 22:57:51 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.983 22:57:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.983 22:57:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.983 22:57:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.983 22:57:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.983 22:57:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:02.983 22:57:52 accel -- accel/accel.sh@41 -- # jq -r . 00:12:02.983 ************************************ 00:12:02.983 START TEST accel_dif_functional_tests 00:12:02.983 ************************************ 00:12:02.983 22:57:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:02.983 [2024-07-13 22:57:52.078574] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:02.983 [2024-07-13 22:57:52.079358] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127188 ] 00:12:02.983 [2024-07-13 22:57:52.233851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.983 [2024-07-13 22:57:52.293957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.983 [2024-07-13 22:57:52.294116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.983 [2024-07-13 22:57:52.294111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.983 00:12:02.983 00:12:02.983 CUnit - A unit testing framework for C - Version 2.1-3 00:12:02.983 http://cunit.sourceforge.net/ 00:12:02.983 00:12:02.983 00:12:02.983 Suite: accel_dif 00:12:02.983 Test: verify: DIF generated, GUARD check ...passed 00:12:02.983 Test: verify: DIF generated, APPTAG check ...passed 00:12:02.983 Test: verify: DIF generated, REFTAG check ...passed 00:12:02.983 Test: verify: DIF not generated, GUARD check ...passed 00:12:02.983 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 22:57:52.382529] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:02.983 [2024-07-13 22:57:52.382689] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:02.983 passed 00:12:02.983 Test: verify: DIF not generated, REFTAG check ...passed 00:12:02.983 Test: verify: APPTAG correct, APPTAG check ...[2024-07-13 22:57:52.382780] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:02.983 passed 00:12:02.983 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:12:02.983 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:02.983 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-13 22:57:52.382974] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:02.983 passed 00:12:02.984 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:02.984 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:12:02.984 Test: verify copy: DIF generated, GUARD check ...[2024-07-13 22:57:52.383253] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:02.984 passed 00:12:02.984 Test: verify copy: DIF generated, APPTAG check ...passed 00:12:02.984 Test: verify copy: DIF generated, REFTAG check ...passed 00:12:02.984 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 22:57:52.383616] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:02.984 passed 00:12:02.984 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 22:57:52.383768] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:02.984 passed 00:12:02.984 Test: verify copy: DIF not generated, REFTAG check ...passed 00:12:02.984 Test: generate copy: DIF generated, GUARD check ...passed 00:12:02.984 Test: generate copy: DIF generated, APPTAG check ...[2024-07-13 22:57:52.383895] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:02.984 passed 00:12:02.984 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:02.984 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:02.984 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:02.984 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:02.984 Test: generate copy: iovecs-len validate ...[2024-07-13 22:57:52.384421] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:12:02.984 passed 00:12:02.984 Test: generate copy: buffer alignment validate ...passed 00:12:02.984 00:12:02.984 Run Summary: Type Total Ran Passed Failed Inactive 00:12:02.984 suites 1 1 n/a 0 0 00:12:02.984 tests 26 26 26 0 0 00:12:02.984 asserts 115 115 115 0 n/a 00:12:02.984 00:12:02.984 Elapsed time = 0.001 seconds 00:12:03.243 00:12:03.243 real 0m0.623s 00:12:03.243 user 0m0.798s 00:12:03.243 sys 0m0.209s 00:12:03.243 22:57:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.243 ************************************ 00:12:03.243 END TEST accel_dif_functional_tests 00:12:03.243 ************************************ 00:12:03.243 22:57:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:12:03.502 22:57:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:03.502 00:12:03.502 real 0m35.425s 00:12:03.502 user 0m37.220s 00:12:03.502 sys 0m4.691s 00:12:03.502 22:57:52 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.502 22:57:52 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.502 ************************************ 00:12:03.502 END TEST accel 00:12:03.502 ************************************ 00:12:03.502 22:57:52 -- common/autotest_common.sh@1142 -- # return 0 00:12:03.502 22:57:52 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:03.502 22:57:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:03.502 22:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.502 22:57:52 -- common/autotest_common.sh@10 -- # set +x 00:12:03.502 ************************************ 00:12:03.502 START TEST accel_rpc 00:12:03.502 ************************************ 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:03.502 * Looking for test storage... 00:12:03.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:03.502 22:57:52 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:03.502 22:57:52 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=127262 00:12:03.502 22:57:52 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 127262 00:12:03.502 22:57:52 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 127262 ']' 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.502 22:57:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.502 [2024-07-13 22:57:52.874976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:03.502 [2024-07-13 22:57:52.875256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127262 ] 00:12:03.760 [2024-07-13 22:57:53.019826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.760 [2024-07-13 22:57:53.109525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.700 22:57:53 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.700 22:57:53 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:04.700 22:57:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:04.700 22:57:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:04.700 22:57:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:04.700 22:57:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:04.700 22:57:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:04.700 22:57:53 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:04.700 22:57:53 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.700 22:57:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.700 ************************************ 00:12:04.700 START TEST accel_assign_opcode 00:12:04.700 ************************************ 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:04.700 [2024-07-13 22:57:53.834439] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:04.700 [2024-07-13 22:57:53.842401] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.700 22:57:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:04.700 22:57:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.958 software 00:12:04.958 00:12:04.958 real 0m0.292s 00:12:04.958 user 0m0.052s 00:12:04.958 sys 0m0.011s 00:12:04.958 22:57:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.958 22:57:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:04.958 ************************************ 00:12:04.958 END TEST accel_assign_opcode 00:12:04.958 ************************************ 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:04.958 22:57:54 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 127262 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 127262 ']' 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 127262 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127262 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.958 killing process with pid 127262 00:12:04.958 22:57:54 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127262' 00:12:04.959 22:57:54 accel_rpc -- common/autotest_common.sh@967 -- # kill 127262 00:12:04.959 22:57:54 accel_rpc -- common/autotest_common.sh@972 -- # wait 127262 00:12:05.217 00:12:05.217 real 0m1.897s 00:12:05.217 user 0m1.979s 00:12:05.217 sys 0m0.446s 00:12:05.217 22:57:54 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.217 22:57:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.217 ************************************ 00:12:05.217 END TEST accel_rpc 00:12:05.217 ************************************ 00:12:05.476 22:57:54 -- common/autotest_common.sh@1142 -- # return 0 00:12:05.476 22:57:54 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:05.476 22:57:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:05.476 22:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.476 22:57:54 -- common/autotest_common.sh@10 -- # set +x 00:12:05.476 ************************************ 00:12:05.476 START TEST app_cmdline 00:12:05.476 ************************************ 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:05.476 * Looking for test storage... 
00:12:05.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:05.476 22:57:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:05.476 22:57:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127372 00:12:05.476 22:57:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127372 00:12:05.476 22:57:54 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 127372 ']' 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.476 22:57:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:05.476 [2024-07-13 22:57:54.813343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:05.476 [2024-07-13 22:57:54.813634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127372 ] 00:12:05.735 [2024-07-13 22:57:54.960649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.735 [2024-07-13 22:57:55.030226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.671 22:57:55 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.671 22:57:55 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:06.671 { 00:12:06.671 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:12:06.671 "fields": { 00:12:06.671 "major": 24, 00:12:06.671 "minor": 9, 00:12:06.671 "patch": 0, 00:12:06.671 "suffix": "-pre", 00:12:06.671 "commit": "719d03c6a" 00:12:06.671 } 00:12:06.671 } 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:06.671 22:57:55 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.671 22:57:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:06.671 22:57:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:06.671 22:57:55 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.671 22:57:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:06.671 22:57:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:06.671 22:57:56 app_cmdline 
-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:06.671 22:57:56 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:06.930 request: 00:12:06.930 { 00:12:06.930 "method": "env_dpdk_get_mem_stats", 00:12:06.930 "req_id": 1 00:12:06.930 } 00:12:06.930 Got JSON-RPC error response 00:12:06.930 response: 00:12:06.930 { 00:12:06.930 "code": -32601, 00:12:06.930 "message": "Method not found" 00:12:06.930 } 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.930 22:57:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127372 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 127372 ']' 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 127372 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127372 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.930 killing process with pid 127372 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127372' 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@967 -- # kill 127372 00:12:06.930 22:57:56 app_cmdline -- common/autotest_common.sh@972 -- # wait 127372 00:12:07.498 00:12:07.498 real 0m2.033s 00:12:07.498 user 0m2.428s 00:12:07.498 sys 0m0.520s 00:12:07.498 22:57:56 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.498 22:57:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 ************************************ 00:12:07.498 END TEST app_cmdline 00:12:07.498 ************************************ 00:12:07.498 22:57:56 -- common/autotest_common.sh@1142 -- # return 0 00:12:07.498 22:57:56 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:07.498 22:57:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:07.498 22:57:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.498 22:57:56 -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 ************************************ 00:12:07.498 START TEST version 00:12:07.498 ************************************ 00:12:07.498 22:57:56 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:07.498 * Looking for test storage... 00:12:07.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:07.498 22:57:56 version -- app/version.sh@17 -- # get_header_version major 00:12:07.498 22:57:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # cut -f2 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # tr -d '"' 00:12:07.498 22:57:56 version -- app/version.sh@17 -- # major=24 00:12:07.498 22:57:56 version -- app/version.sh@18 -- # get_header_version minor 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # cut -f2 00:12:07.498 22:57:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # tr -d '"' 00:12:07.498 22:57:56 version -- app/version.sh@18 -- # minor=9 00:12:07.498 22:57:56 version -- app/version.sh@19 -- # get_header_version patch 00:12:07.498 22:57:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # cut -f2 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # tr -d '"' 00:12:07.498 22:57:56 version -- app/version.sh@19 -- # patch=0 00:12:07.498 22:57:56 version -- app/version.sh@20 -- # get_header_version suffix 00:12:07.498 22:57:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # cut -f2 00:12:07.498 22:57:56 version -- app/version.sh@14 -- # tr -d '"' 00:12:07.498 22:57:56 version -- app/version.sh@20 -- # suffix=-pre 00:12:07.498 22:57:56 version -- app/version.sh@22 -- # version=24.9 00:12:07.498 22:57:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:07.498 22:57:56 version -- app/version.sh@28 -- # version=24.9rc0 00:12:07.498 22:57:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:07.498 22:57:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:07.498 22:57:56 version -- app/version.sh@30 -- # py_version=24.9rc0 00:12:07.498 22:57:56 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:12:07.498 00:12:07.498 real 0m0.148s 00:12:07.498 user 0m0.111s 00:12:07.498 sys 0m0.071s 00:12:07.498 22:57:56 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.498 22:57:56 version -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 ************************************ 00:12:07.498 END TEST version 00:12:07.498 ************************************ 
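Note on the version probes above: each one is the same three-stage pipeline over include/spdk/version.h (grep the #define, cut field 2, strip the quotes). A standalone bash reduction — the grep/cut/tr pipeline and the get_header_version name come straight from the trace; the ${1^^} uppercasing and the final rc0 mapping are reconstructed from what the trace computes:

    # Reconstructed from the xtrace above; ${1^^} is an assumption, the
    # trace only shows the already-expanded grep pattern.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    major=$(get_header_version major)    # 24
    minor=$(get_header_version minor)    # 9
    patch=$(get_header_version patch)    # 0
    suffix=$(get_header_version suffix)  # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0   # 24.9 -> 24.9rc0

The script then compares this against python3 -c 'import spdk; print(spdk.__version__)' (24.9rc0 above), so any mismatch between the headers and the installed Python package fails the test.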
00:12:07.757 22:57:56 -- common/autotest_common.sh@1142 -- # return 0 00:12:07.757 22:57:56 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:12:07.757 22:57:56 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:07.757 22:57:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:07.757 22:57:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.757 22:57:56 -- common/autotest_common.sh@10 -- # set +x 00:12:07.757 ************************************ 00:12:07.757 START TEST blockdev_general 00:12:07.757 ************************************ 00:12:07.757 22:57:56 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:07.757 * Looking for test storage... 00:12:07.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:07.757 22:57:57 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:12:07.757 22:57:57 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=127529 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 127529 00:12:07.758 22:57:57 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 127529 ']' 00:12:07.758 22:57:57 blockdev_general -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:07.758 22:57:57 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.758 22:57:57 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.758 22:57:57 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.758 22:57:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:07.758 22:57:57 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:07.758 [2024-07-13 22:57:57.100504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:07.758 [2024-07-13 22:57:57.100758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127529 ] 00:12:08.016 [2024-07-13 22:57:57.248417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.016 [2024-07-13 22:57:57.311768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.953 22:57:58 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.953 22:57:58 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:12:08.953 22:57:58 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:12:08.953 22:57:58 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:12:08.953 22:57:58 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:12:08.953 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.953 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:08.953 [2024-07-13 22:57:58.342951] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.953 [2024-07-13 22:57:58.343087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.953 00:12:08.953 [2024-07-13 22:57:58.350921] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.953 [2024-07-13 22:57:58.351045] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.953 00:12:09.212 Malloc0 00:12:09.212 Malloc1 00:12:09.212 Malloc2 00:12:09.212 Malloc3 00:12:09.212 Malloc4 00:12:09.212 Malloc5 00:12:09.212 Malloc6 00:12:09.212 Malloc7 00:12:09.212 Malloc8 00:12:09.212 Malloc9 00:12:09.212 [2024-07-13 22:57:58.540616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:09.212 [2024-07-13 22:57:58.540753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.212 [2024-07-13 22:57:58.540799] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:09.212 [2024-07-13 22:57:58.540844] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.212 [2024-07-13 22:57:58.543569] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.212 [2024-07-13 22:57:58.543666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:09.212 TestPT 00:12:09.212 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.212 22:57:58 
blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:09.212 5000+0 records in 00:12:09.212 5000+0 records out 00:12:09.212 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0246591 s, 415 MB/s 00:12:09.212 22:57:58 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:09.212 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.212 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.471 AIO0 00:12:09.471 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.471 22:57:58 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:12:09.472 22:57:58 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:12:09.472 22:57:58 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:12:09.473 22:57:58 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d6a634c7-c081-4850-9062-6ec6cb87ecab"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6a634c7-c081-4850-9062-6ec6cb87ecab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' 
"reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "106f8ece-21be-5d43-84bf-d85f7b430808"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "106f8ece-21be-5d43-84bf-d85f7b430808",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b45e48b2-4d78-5d5e-a509-0822a2a54cc3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b45e48b2-4d78-5d5e-a509-0822a2a54cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3427f0b5-0aea-5b89-96f0-78d2faaf1061"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3427f0b5-0aea-5b89-96f0-78d2faaf1061",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' 
"249e37f0-f744-5245-b184-468a6ecba645"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "249e37f0-f744-5245-b184-468a6ecba645",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "714e76e0-6b68-5619-a04f-f8dfedef7e74"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "714e76e0-6b68-5619-a04f-f8dfedef7e74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "837fb58e-03b3-52ea-ab3d-fec4e4cf7ed8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "837fb58e-03b3-52ea-ab3d-fec4e4cf7ed8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c8b87019-c25c-587e-8a14-f50032625978"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8b87019-c25c-587e-8a14-f50032625978",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "b80c12e8-d175-56f4-8d90-bab22072166a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b80c12e8-d175-56f4-8d90-bab22072166a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "056c029a-8af0-54b5-a974-b0391164dcb9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "056c029a-8af0-54b5-a974-b0391164dcb9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dabb5577-00ec-553a-8c19-52ec1d5a4d13"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dabb5577-00ec-553a-8c19-52ec1d5a4d13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5ecf1906-ef9b-5de4-8600-f14de505cd49"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5ecf1906-ef9b-5de4-8600-f14de505cd49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d7ea1349-b927-4bf5-a432-356703ea7574"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d7ea1349-b927-4bf5-a432-356703ea7574",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d7ea1349-b927-4bf5-a432-356703ea7574",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "171b1e06-67fb-49f8-aa8a-011590d26dd6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "78fea3d6-5646-4ef2-9985-7e4904f6ef75",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "47feeb56-7387-4578-a301-b66f60870d3e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "47feeb56-7387-4578-a301-b66f60870d3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "47feeb56-7387-4578-a301-b66f60870d3e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "bfc61314-e16e-40d6-96fa-a0edda1a8e57",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "396de916-4c06-4d7b-b833-e9b1602e6f34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ceda7f71-794e-4e14-a4b8-fec6e8d240a9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ceda7f71-794e-4e14-a4b8-fec6e8d240a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ceda7f71-794e-4e14-a4b8-fec6e8d240a9",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "a3b4bc8e-6fe8-4d48-b564-8db03d96a7dd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "14c8b8e2-0c8d-4155-aad8-d1cdab4a29fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "427b3b3d-fed4-4244-81b0-31621926c982"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "427b3b3d-fed4-4244-81b0-31621926c982",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' 
"copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:09.473 22:57:58 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:12:09.473 22:57:58 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:12:09.732 22:57:58 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:12:09.732 22:57:58 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 127529 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 127529 ']' 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 127529 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127529 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.732 killing process with pid 127529 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127529' 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@967 -- # kill 127529 00:12:09.732 22:57:58 blockdev_general -- common/autotest_common.sh@972 -- # wait 127529 00:12:10.301 22:57:59 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:10.301 22:57:59 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:10.301 22:57:59 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:10.301 22:57:59 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.301 22:57:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 ************************************ 00:12:10.301 START TEST bdev_hello_world 00:12:10.301 ************************************ 00:12:10.301 22:57:59 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:10.301 [2024-07-13 22:57:59.532334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:10.301 [2024-07-13 22:57:59.532656] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127583 ] 00:12:10.301 [2024-07-13 22:57:59.676725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.560 [2024-07-13 22:57:59.757355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.560 [2024-07-13 22:57:59.897913] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:10.560 [2024-07-13 22:57:59.898061] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:10.560 [2024-07-13 22:57:59.905873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:10.560 [2024-07-13 22:57:59.905959] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:10.560 [2024-07-13 22:57:59.913927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:10.560 [2024-07-13 22:57:59.914042] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:10.560 [2024-07-13 22:57:59.914091] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:10.819 [2024-07-13 22:58:00.010886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:10.819 [2024-07-13 22:58:00.011065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.819 [2024-07-13 22:58:00.011107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:10.819 [2024-07-13 22:58:00.011148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.819 [2024-07-13 22:58:00.013800] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.819 [2024-07-13 22:58:00.013882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:10.819 [2024-07-13 22:58:00.177617] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:10.819 [2024-07-13 22:58:00.177730] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:10.819 [2024-07-13 22:58:00.177843] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:10.819 [2024-07-13 22:58:00.177948] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:10.819 [2024-07-13 22:58:00.178070] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:10.819 [2024-07-13 22:58:00.178132] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:10.819 [2024-07-13 22:58:00.178241] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
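The hello_bdev trace above shows the example's complete I/O cycle: open the bdev, get an I/O channel, write a buffer, then read it back ("Hello World!"). It can be reproduced outside the harness with the same invocation the test script recorded; --json points at the same bdev.json the harness used to create the Malloc/raid/AIO bdevs listed in the dump earlier, and -b names the bdev to exercise (paths exactly as traced):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json \
        -b Malloc0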
00:12:10.819 00:12:10.819 [2024-07-13 22:58:00.178294] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:11.384 00:12:11.384 real 0m1.068s 00:12:11.384 user 0m0.595s 00:12:11.384 sys 0m0.326s 00:12:11.384 22:58:00 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.384 22:58:00 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:11.384 ************************************ 00:12:11.384 END TEST bdev_hello_world 00:12:11.384 ************************************ 00:12:11.384 22:58:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:11.384 22:58:00 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:12:11.384 22:58:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.384 22:58:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.384 22:58:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:11.384 ************************************ 00:12:11.384 START TEST bdev_bounds 00:12:11.384 ************************************ 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=127621 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 127621' 00:12:11.384 Process bdevio pid: 127621 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 127621 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 127621 ']' 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:11.384 22:58:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:11.384 [2024-07-13 22:58:00.657525] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:11.384 [2024-07-13 22:58:00.658000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127621 ] 00:12:11.641 [2024-07-13 22:58:00.815218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.641 [2024-07-13 22:58:00.913020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.641 [2024-07-13 22:58:00.913173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.641 [2024-07-13 22:58:00.913171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.899 [2024-07-13 22:58:01.063076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:11.899 [2024-07-13 22:58:01.063253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:11.900 [2024-07-13 22:58:01.070960] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:11.900 [2024-07-13 22:58:01.071059] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:11.900 [2024-07-13 22:58:01.079034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:11.900 [2024-07-13 22:58:01.079174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:11.900 [2024-07-13 22:58:01.079217] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:11.900 [2024-07-13 22:58:01.185304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:11.900 [2024-07-13 22:58:01.185446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.900 [2024-07-13 22:58:01.185542] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:11.900 [2024-07-13 22:58:01.185585] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.900 [2024-07-13 22:58:01.188723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.900 [2024-07-13 22:58:01.188780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:12.509 22:58:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.509 22:58:01 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:12:12.509 22:58:01 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:12.509 I/O targets: 00:12:12.510 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:12.510 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:12.510 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:12.510 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:12.510 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:12.510 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:12.510 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:12:12.510 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:12.510 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:12:12.510 00:12:12.510 00:12:12.510 CUnit - A unit testing framework for C - Version 2.1-3 00:12:12.510 http://cunit.sourceforge.net/ 00:12:12.510 00:12:12.510 00:12:12.510 Suite: bdevio tests on: AIO0 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.510 Test: blockdev write zeroes read split ...passed 00:12:12.510 Test: blockdev write zeroes read split partial ...passed 00:12:12.510 Test: blockdev reset ...passed 00:12:12.510 Test: blockdev write read 8 blocks ...passed 00:12:12.510 Test: blockdev write read size > 128k ...passed 00:12:12.510 Test: blockdev write read invalid size ...passed 00:12:12.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.510 Test: blockdev write read max offset ...passed 00:12:12.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.510 Test: blockdev writev readv 8 blocks ...passed 00:12:12.510 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.510 Test: blockdev writev readv block ...passed 00:12:12.510 Test: blockdev writev readv size > 128k ...passed 00:12:12.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.510 Test: blockdev comparev and writev ...passed 00:12:12.510 Test: blockdev nvme passthru rw ...passed 00:12:12.510 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.510 Test: blockdev nvme admin passthru ...passed 00:12:12.510 Test: blockdev copy ...passed 00:12:12.510 Suite: bdevio tests on: raid1 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.510 Test: blockdev write zeroes read split ...passed 00:12:12.510 Test: blockdev write zeroes read split partial ...passed 00:12:12.510 Test: blockdev reset ...passed 00:12:12.510 Test: blockdev write read 8 blocks ...passed 00:12:12.510 Test: blockdev write read size > 128k ...passed 00:12:12.510 Test: blockdev write read invalid size ...passed 00:12:12.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.510 Test: blockdev write read max offset ...passed 00:12:12.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.510 Test: blockdev writev readv 8 blocks ...passed 00:12:12.510 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.510 Test: blockdev writev readv block ...passed 00:12:12.510 Test: blockdev writev readv size > 128k ...passed 00:12:12.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.510 Test: blockdev comparev and writev ...passed 00:12:12.510 Test: blockdev nvme passthru rw ...passed 00:12:12.510 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.510 Test: blockdev nvme admin passthru ...passed 00:12:12.510 Test: blockdev copy ...passed 00:12:12.510 Suite: bdevio tests on: concat0 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.510 Test: blockdev write zeroes read split 
...passed 00:12:12.510 Test: blockdev write zeroes read split partial ...passed 00:12:12.510 Test: blockdev reset ...passed 00:12:12.510 Test: blockdev write read 8 blocks ...passed 00:12:12.510 Test: blockdev write read size > 128k ...passed 00:12:12.510 Test: blockdev write read invalid size ...passed 00:12:12.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.510 Test: blockdev write read max offset ...passed 00:12:12.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.510 Test: blockdev writev readv 8 blocks ...passed 00:12:12.510 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.510 Test: blockdev writev readv block ...passed 00:12:12.510 Test: blockdev writev readv size > 128k ...passed 00:12:12.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.510 Test: blockdev comparev and writev ...passed 00:12:12.510 Test: blockdev nvme passthru rw ...passed 00:12:12.510 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.510 Test: blockdev nvme admin passthru ...passed 00:12:12.510 Test: blockdev copy ...passed 00:12:12.510 Suite: bdevio tests on: raid0 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.510 Test: blockdev write zeroes read split ...passed 00:12:12.510 Test: blockdev write zeroes read split partial ...passed 00:12:12.510 Test: blockdev reset ...passed 00:12:12.510 Test: blockdev write read 8 blocks ...passed 00:12:12.510 Test: blockdev write read size > 128k ...passed 00:12:12.510 Test: blockdev write read invalid size ...passed 00:12:12.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.510 Test: blockdev write read max offset ...passed 00:12:12.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.510 Test: blockdev writev readv 8 blocks ...passed 00:12:12.510 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.510 Test: blockdev writev readv block ...passed 00:12:12.510 Test: blockdev writev readv size > 128k ...passed 00:12:12.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.510 Test: blockdev comparev and writev ...passed 00:12:12.510 Test: blockdev nvme passthru rw ...passed 00:12:12.510 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.510 Test: blockdev nvme admin passthru ...passed 00:12:12.510 Test: blockdev copy ...passed 00:12:12.510 Suite: bdevio tests on: TestPT 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.510 Test: blockdev write zeroes read split ...passed 00:12:12.510 Test: blockdev write zeroes read split partial ...passed 00:12:12.510 Test: blockdev reset ...passed 00:12:12.510 Test: blockdev write read 8 blocks ...passed 00:12:12.510 Test: blockdev write read size > 128k ...passed 00:12:12.510 Test: blockdev write read invalid size ...passed 00:12:12.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.510 Test: blockdev write read max offset ...passed 00:12:12.510 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.510 Test: blockdev writev readv 8 blocks ...passed 00:12:12.510 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.510 Test: blockdev writev readv block ...passed 00:12:12.510 Test: blockdev writev readv size > 128k ...passed 00:12:12.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.510 Test: blockdev comparev and writev ...passed 00:12:12.510 Test: blockdev nvme passthru rw ...passed 00:12:12.510 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.510 Test: blockdev nvme admin passthru ...passed 00:12:12.510 Test: blockdev copy ...passed 00:12:12.510 Suite: bdevio tests on: Malloc2p7 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.510 Test: blockdev write zeroes read split ...passed 00:12:12.510 Test: blockdev write zeroes read split partial ...passed 00:12:12.510 Test: blockdev reset ...passed 00:12:12.510 Test: blockdev write read 8 blocks ...passed 00:12:12.510 Test: blockdev write read size > 128k ...passed 00:12:12.510 Test: blockdev write read invalid size ...passed 00:12:12.510 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.510 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.510 Test: blockdev write read max offset ...passed 00:12:12.510 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.510 Test: blockdev writev readv 8 blocks ...passed 00:12:12.510 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.510 Test: blockdev writev readv block ...passed 00:12:12.510 Test: blockdev writev readv size > 128k ...passed 00:12:12.510 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.510 Test: blockdev comparev and writev ...passed 00:12:12.510 Test: blockdev nvme passthru rw ...passed 00:12:12.510 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.510 Test: blockdev nvme admin passthru ...passed 00:12:12.510 Test: blockdev copy ...passed 00:12:12.510 Suite: bdevio tests on: Malloc2p6 00:12:12.510 Test: blockdev write read block ...passed 00:12:12.510 Test: blockdev write zeroes read block ...passed 00:12:12.510 Test: blockdev write zeroes read no split ...passed 00:12:12.769 Test: blockdev write zeroes read split ...passed 00:12:12.769 Test: blockdev write zeroes read split partial ...passed 00:12:12.769 Test: blockdev reset ...passed 00:12:12.769 Test: blockdev write read 8 blocks ...passed 00:12:12.769 Test: blockdev write read size > 128k ...passed 00:12:12.769 Test: blockdev write read invalid size ...passed 00:12:12.769 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.769 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.769 Test: blockdev write read max offset ...passed 00:12:12.769 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.769 Test: blockdev writev readv 8 blocks ...passed 00:12:12.769 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.769 Test: blockdev writev readv block ...passed 00:12:12.769 Test: blockdev writev readv size > 128k ...passed 00:12:12.769 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.769 Test: blockdev comparev and writev ...passed 00:12:12.769 Test: blockdev nvme passthru rw ...passed 00:12:12.769 Test: blockdev nvme passthru vendor 
specific ...passed 00:12:12.769 Test: blockdev nvme admin passthru ...passed 00:12:12.769 Test: blockdev copy ...passed 00:12:12.769 Suite: bdevio tests on: Malloc2p5 00:12:12.769 Test: blockdev write read block ...passed 00:12:12.769 Test: blockdev write zeroes read block ...passed 00:12:12.769 Test: blockdev write zeroes read no split ...passed 00:12:12.769 Test: blockdev write zeroes read split ...passed 00:12:12.769 Test: blockdev write zeroes read split partial ...passed 00:12:12.769 Test: blockdev reset ...passed 00:12:12.769 Test: blockdev write read 8 blocks ...passed 00:12:12.769 Test: blockdev write read size > 128k ...passed 00:12:12.769 Test: blockdev write read invalid size ...passed 00:12:12.769 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.769 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc2p4 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc2p3 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: 
blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc2p2 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc2p1 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 
Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc2p0 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc1p1 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: 
bdevio tests on: Malloc1p0 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.770 Test: blockdev reset ...passed 00:12:12.770 Test: blockdev write read 8 blocks ...passed 00:12:12.770 Test: blockdev write read size > 128k ...passed 00:12:12.770 Test: blockdev write read invalid size ...passed 00:12:12.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.770 Test: blockdev write read max offset ...passed 00:12:12.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.770 Test: blockdev writev readv 8 blocks ...passed 00:12:12.770 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.770 Test: blockdev writev readv block ...passed 00:12:12.770 Test: blockdev writev readv size > 128k ...passed 00:12:12.770 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.770 Test: blockdev comparev and writev ...passed 00:12:12.770 Test: blockdev nvme passthru rw ...passed 00:12:12.770 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.770 Test: blockdev nvme admin passthru ...passed 00:12:12.770 Test: blockdev copy ...passed 00:12:12.770 Suite: bdevio tests on: Malloc0 00:12:12.770 Test: blockdev write read block ...passed 00:12:12.770 Test: blockdev write zeroes read block ...passed 00:12:12.770 Test: blockdev write zeroes read no split ...passed 00:12:12.770 Test: blockdev write zeroes read split ...passed 00:12:12.770 Test: blockdev write zeroes read split partial ...passed 00:12:12.771 Test: blockdev reset ...passed 00:12:12.771 Test: blockdev write read 8 blocks ...passed 00:12:12.771 Test: blockdev write read size > 128k ...passed 00:12:12.771 Test: blockdev write read invalid size ...passed 00:12:12.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:12.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:12.771 Test: blockdev write read max offset ...passed 00:12:12.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:12.771 Test: blockdev writev readv 8 blocks ...passed 00:12:12.771 Test: blockdev writev readv 30 x 1block ...passed 00:12:12.771 Test: blockdev writev readv block ...passed 00:12:12.771 Test: blockdev writev readv size > 128k ...passed 00:12:12.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:12.771 Test: blockdev comparev and writev ...passed 00:12:12.771 Test: blockdev nvme passthru rw ...passed 00:12:12.771 Test: blockdev nvme passthru vendor specific ...passed 00:12:12.771 Test: blockdev nvme admin passthru ...passed 00:12:12.771 Test: blockdev copy ...passed 00:12:12.771 00:12:12.771 Run Summary: Type Total Ran Passed Failed Inactive 00:12:12.771 suites 16 16 n/a 0 0 00:12:12.771 tests 368 368 368 0 0 00:12:12.771 asserts 2224 2224 2224 0 n/a 00:12:12.771 00:12:12.771 Elapsed time = 0.651 seconds 00:12:12.771 0 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 127621 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 127621 ']' 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 127621 
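The run summary above covers all 16 bdev suites (Malloc0 through AIO0): 368 of 368 tests and 2224 asserts passed with 0 failures in 0.651 seconds. To rerun the same bounds checks by hand, start bdevio in wait mode and then trigger the suites with tests.py, as the harness did above (paths and flags copied from the trace; -w makes bdevio wait until tests.py issues perform_tests over its RPC socket):

    cd /home/vagrant/spdk_repo/spdk
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests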
00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127621 00:12:12.771 killing process with pid 127621 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127621' 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 127621 00:12:12.771 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 127621 00:12:13.029 ************************************ 00:12:13.029 END TEST bdev_bounds 00:12:13.029 ************************************ 00:12:13.029 22:58:02 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:12:13.029 00:12:13.029 real 0m1.841s 00:12:13.029 user 0m4.344s 00:12:13.029 sys 0m0.502s 00:12:13.029 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.029 22:58:02 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:13.287 22:58:02 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:13.287 22:58:02 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:13.287 22:58:02 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:13.287 22:58:02 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.287 22:58:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:13.287 ************************************ 00:12:13.287 START TEST bdev_nbd 00:12:13.287 ************************************ 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:12:13.287 
22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:13.287 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=127686 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 127686 /var/tmp/spdk-nbd.sock 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 127686 ']' 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:13.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.288 22:58:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:13.288 [2024-07-13 22:58:02.570378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:13.288 [2024-07-13 22:58:02.571611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.546 [2024-07-13 22:58:02.724723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.546 [2024-07-13 22:58:02.798999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.546 [2024-07-13 22:58:02.946460] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:13.546 [2024-07-13 22:58:02.946776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:13.804 [2024-07-13 22:58:02.954435] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:13.804 [2024-07-13 22:58:02.954647] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:13.804 [2024-07-13 22:58:02.962449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:13.804 [2024-07-13 22:58:02.962670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:13.804 [2024-07-13 22:58:02.962816] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:13.804 [2024-07-13 22:58:03.066305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:13.804 [2024-07-13 22:58:03.066648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.804 [2024-07-13 22:58:03.066855] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:13.804 [2024-07-13 22:58:03.067023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.804 [2024-07-13 22:58:03.069864] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.804 [2024-07-13 22:58:03.070059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:14.368 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:14.626 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.626 1+0 records in 00:12:14.626 1+0 records out 00:12:14.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279365 s, 14.7 MB/s 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:14.627 22:58:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:14.884 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.885 1+0 records in 00:12:14.885 1+0 records out 00:12:14.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368378 s, 11.1 MB/s 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:14.885 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.143 1+0 records in 00:12:15.143 1+0 records out 00:12:15.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467707 s, 8.8 MB/s 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.143 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.400 1+0 records in 00:12:15.400 1+0 records out 00:12:15.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543924 s, 7.5 MB/s 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.400 22:58:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 
-- # local i 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.658 1+0 records in 00:12:15.658 1+0 records out 00:12:15.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621579 s, 6.6 MB/s 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.658 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.223 1+0 records in 00:12:16.223 1+0 records out 00:12:16.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395103 s, 10.4 MB/s 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:16.223 22:58:05 
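
The waitfornbd helper traced repeatedly above (common/autotest_common.sh@866-887) is the readiness gate for each device: it polls /proc/partitions until the kernel lists the node, then reads one 4 KiB block through it to prove the export actually serves I/O. Below is a minimal reconstruction from the traced commands, not the verbatim SPDK helper; the retry sleep and the /tmp scratch path are assumptions, since every probe in this run succeeds on the first pass and the real script writes into the repo's test/bdev directory:

    # Reconstructed sketch of waitfornbd, assembled from the trace markers.
    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: wait for the kernel to register the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed interval; no delay is visible in this trace
        done
        # Phase 2: prove the device answers reads (O_DIRECT skips the cache).
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }
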
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.223 1+0 records in 00:12:16.223 1+0 records out 00:12:16.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758688 s, 5.4 MB/s 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.223 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.481 1+0 records in 00:12:16.481 1+0 records out 00:12:16.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637251 s, 6.4 MB/s 00:12:16.481 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.794 22:58:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.794 1+0 records in 00:12:16.794 1+0 records out 00:12:16.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903479 s, 4.5 MB/s 00:12:16.794 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.052 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:17.053 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.310 1+0 records in 00:12:17.310 1+0 records out 00:12:17.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681325 s, 6.0 MB/s 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.310 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.311 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:17.568 22:58:06 
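
The `(( i++ ))` / `(( i < 16 ))` pair traced as bdev/nbd_common.sh@27 drives this whole phase: sixteen bdevs (Malloc0 through AIO0) are exported one after another, each followed by a waitfornbd probe before the counter advances. A sketch of that loop's shape, under the assumption that the bdev names live in the bdev_list array that appears later in the trace; the structure is inferred from the @27/@28 markers rather than copied from the script:

    # Inferred shape of the start-up loop (nbd_common.sh@27-28).
    for ((i = 0; i < 16; i++)); do
        nbd_device=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdev_list[i]}")
        waitfornbd "$(basename "$nbd_device")"
    done
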
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:17.568 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.568 1+0 records in 00:12:17.568 1+0 records out 00:12:17.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502168 s, 8.2 MB/s 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.569 22:58:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.827 1+0 records in 00:12:17.827 1+0 records out 00:12:17.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00171692 s, 2.4 MB/s 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.827 22:58:07 
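
Worth noting about the dd lines scattered through this phase: iflag=direct forces the single 4096-byte read to bypass the page cache, so a successful copy demonstrates a full round trip through the NBD socket into the SPDK bdev, and the stat / '[' 4096 '!=' 0 ']' check that follows rejects an empty result. The reported throughput figures (roughly 1.4 to 14.3 MB/s across this section) are one-block latency numbers, not bandwidth measurements. The probe reduces to:

    # One cache-bypassing block read; success means the export serves I/O.
    # /tmp/nbdtest stands in for the repo's test/bdev/nbdtest scratch file.
    dd if=/dev/nbd7 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest) && rm -f /tmp/nbdtest
    [ "$size" != 0 ]
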
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.827 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.085 1+0 records in 00:12:18.085 1+0 records out 00:12:18.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00260423 s, 1.6 MB/s 00:12:18.085 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.086 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.345 1+0 records in 00:12:18.345 1+0 records out 00:12:18.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00296115 s, 1.4 MB/s 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.345 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.604 1+0 records in 00:12:18.604 1+0 records out 00:12:18.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000871227 s, 4.7 MB/s 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 
']' 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.604 22:58:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.864 1+0 records in 00:12:18.864 1+0 records out 00:12:18.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00166492 s, 2.5 MB/s 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.864 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:19.123 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd0", 00:12:19.123 "bdev_name": "Malloc0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd1", 00:12:19.123 "bdev_name": "Malloc1p0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd2", 00:12:19.123 "bdev_name": "Malloc1p1" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd3", 00:12:19.123 "bdev_name": "Malloc2p0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd4", 00:12:19.123 "bdev_name": "Malloc2p1" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd5", 00:12:19.123 "bdev_name": "Malloc2p2" 00:12:19.123 }, 00:12:19.123 { 
00:12:19.123 "nbd_device": "/dev/nbd6", 00:12:19.123 "bdev_name": "Malloc2p3" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd7", 00:12:19.123 "bdev_name": "Malloc2p4" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd8", 00:12:19.123 "bdev_name": "Malloc2p5" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd9", 00:12:19.123 "bdev_name": "Malloc2p6" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd10", 00:12:19.123 "bdev_name": "Malloc2p7" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd11", 00:12:19.123 "bdev_name": "TestPT" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd12", 00:12:19.123 "bdev_name": "raid0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd13", 00:12:19.123 "bdev_name": "concat0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd14", 00:12:19.123 "bdev_name": "raid1" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd15", 00:12:19.123 "bdev_name": "AIO0" 00:12:19.123 } 00:12:19.123 ]' 00:12:19.123 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:19.123 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd0", 00:12:19.123 "bdev_name": "Malloc0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd1", 00:12:19.123 "bdev_name": "Malloc1p0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd2", 00:12:19.123 "bdev_name": "Malloc1p1" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd3", 00:12:19.123 "bdev_name": "Malloc2p0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd4", 00:12:19.123 "bdev_name": "Malloc2p1" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd5", 00:12:19.123 "bdev_name": "Malloc2p2" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd6", 00:12:19.123 "bdev_name": "Malloc2p3" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd7", 00:12:19.123 "bdev_name": "Malloc2p4" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd8", 00:12:19.123 "bdev_name": "Malloc2p5" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd9", 00:12:19.123 "bdev_name": "Malloc2p6" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd10", 00:12:19.123 "bdev_name": "Malloc2p7" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd11", 00:12:19.123 "bdev_name": "TestPT" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd12", 00:12:19.123 "bdev_name": "raid0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd13", 00:12:19.123 "bdev_name": "concat0" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd14", 00:12:19.123 "bdev_name": "raid1" 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "nbd_device": "/dev/nbd15", 00:12:19.123 "bdev_name": "AIO0" 00:12:19.123 } 00:12:19.123 ]' 00:12:19.123 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.382 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.641 22:58:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.641 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:19.901 22:58:09 
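
Teardown, starting at nbd_common.sh@120 above, walks the same sixteen nodes back down: the JSON from nbd_get_disks is reduced to a device list with jq -r '.[] | .nbd_device', each node gets an nbd_stop_disk RPC, and waitfornbd_exit then polls until the kernel drops it from /proc/partitions. Below is a reconstruction of the exit poll from the traced nbd_common.sh@35-45 markers, not the verbatim helper; the sleep is again an assumption, as every device in this run is already gone at the first probe:

    # Reconstructed sketch of waitfornbd_exit.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: stop waiting
            sleep 0.1   # assumed; not observable in this run
        done
        return 0
    }
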
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.901 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.160 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.419 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.689 22:58:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:20.958 
22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.958 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.216 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:21.475 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd9 /proc/partitions 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.732 22:58:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.732 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.296 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.555 22:58:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.813 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.071 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:23.330 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.330 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.330 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:23.330 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.331 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.589 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:23.590 22:58:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:23.849 /dev/nbd0 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.849 1+0 records in 00:12:23.849 1+0 records out 00:12:23.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554135 s, 7.4 MB/s 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:23.849 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:24.109 /dev/nbd1 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 
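
Before the restart above, nbd_get_count re-queried nbd_get_disks, got the empty list '[]', and grep -c /dev/nbd confirmed a count of 0, so the '[' 0 -ne 0 ']' guard fell through. blockdev.sh@323 then rebuilds the whole topology for the data-verification pass, this time binding every bdev to an explicit node (nbd_common.sh@15). The device list is in shell glob order — nbd0, nbd1, nbd10, nbd11, ..., nbd9 — which is why Malloc1p1 lands on /dev/nbd10 here instead of /dev/nbd2 as in the first pass. The explicit form of the call, taken from the trace:

    # Verify phase: the caller names both the bdev and the target node.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
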
-- # (( i <= 20 )) 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.109 1+0 records in 00:12:24.109 1+0 records out 00:12:24.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607044 s, 6.7 MB/s 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.109 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:24.368 /dev/nbd10 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.368 1+0 records in 00:12:24.368 1+0 records out 00:12:24.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623227 s, 6.6 MB/s 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.368 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 
/dev/nbd11 00:12:24.627 /dev/nbd11 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.627 1+0 records in 00:12:24.627 1+0 records out 00:12:24.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285775 s, 14.3 MB/s 00:12:24.627 22:58:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.627 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:24.885 /dev/nbd12 00:12:24.885 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:25.144 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:25.144 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.145 1+0 records in 00:12:25.145 1+0 records 
out 00:12:25.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452213 s, 9.1 MB/s 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:25.145 /dev/nbd13 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.145 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.145 1+0 records in 00:12:25.145 1+0 records out 00:12:25.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433157 s, 9.5 MB/s 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.406 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:25.664 /dev/nbd14 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:25.664 22:58:14 
blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.664 1+0 records in 00:12:25.664 1+0 records out 00:12:25.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389886 s, 10.5 MB/s 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.664 22:58:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:25.923 /dev/nbd15 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.923 1+0 records in 00:12:25.923 1+0 records out 00:12:25.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348561 s, 11.8 MB/s 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.923 22:58:15 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.923 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:26.187 /dev/nbd2 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.187 1+0 records in 00:12:26.187 1+0 records out 00:12:26.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006284 s, 6.5 MB/s 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.187 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:26.445 /dev/nbd3 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
= 1 )) 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.445 1+0 records in 00:12:26.445 1+0 records out 00:12:26.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006363 s, 6.4 MB/s 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.445 22:58:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:26.703 /dev/nbd4 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.703 1+0 records in 00:12:26.703 1+0 records out 00:12:26.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744888 s, 5.5 MB/s 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:26.703 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:26.961 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.961 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.961 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:27.221 /dev/nbd5 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.221 1+0 records in 00:12:27.221 1+0 records out 00:12:27.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613833 s, 6.7 MB/s 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.221 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:27.480 /dev/nbd6 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:27.480 22:58:16 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.480 1+0 records in 00:12:27.480 1+0 records out 00:12:27.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0020499 s, 2.0 MB/s 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.480 22:58:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:27.739 /dev/nbd7 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.739 1+0 records in 00:12:27.739 1+0 records out 00:12:27.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471736 s, 8.7 MB/s 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.739 22:58:17 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.739 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:27.998 /dev/nbd8 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.998 1+0 records in 00:12:27.998 1+0 records out 00:12:27.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00174778 s, 2.3 MB/s 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.998 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:28.263 /dev/nbd9 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:28.263 
22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.263 1+0 records in 00:12:28.263 1+0 records out 00:12:28.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119949 s, 3.4 MB/s 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.263 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd0", 00:12:28.537 "bdev_name": "Malloc0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd1", 00:12:28.537 "bdev_name": "Malloc1p0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd10", 00:12:28.537 "bdev_name": "Malloc1p1" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd11", 00:12:28.537 "bdev_name": "Malloc2p0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd12", 00:12:28.537 "bdev_name": "Malloc2p1" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd13", 00:12:28.537 "bdev_name": "Malloc2p2" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd14", 00:12:28.537 "bdev_name": "Malloc2p3" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd15", 00:12:28.537 "bdev_name": "Malloc2p4" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd2", 00:12:28.537 "bdev_name": "Malloc2p5" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd3", 00:12:28.537 "bdev_name": "Malloc2p6" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd4", 00:12:28.537 "bdev_name": "Malloc2p7" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd5", 00:12:28.537 "bdev_name": "TestPT" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd6", 00:12:28.537 "bdev_name": "raid0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd7", 00:12:28.537 "bdev_name": "concat0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd8", 00:12:28.537 "bdev_name": "raid1" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd9", 00:12:28.537 "bdev_name": "AIO0" 00:12:28.537 } 00:12:28.537 ]' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd0", 00:12:28.537 
"bdev_name": "Malloc0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd1", 00:12:28.537 "bdev_name": "Malloc1p0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd10", 00:12:28.537 "bdev_name": "Malloc1p1" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd11", 00:12:28.537 "bdev_name": "Malloc2p0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd12", 00:12:28.537 "bdev_name": "Malloc2p1" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd13", 00:12:28.537 "bdev_name": "Malloc2p2" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd14", 00:12:28.537 "bdev_name": "Malloc2p3" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd15", 00:12:28.537 "bdev_name": "Malloc2p4" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd2", 00:12:28.537 "bdev_name": "Malloc2p5" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd3", 00:12:28.537 "bdev_name": "Malloc2p6" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd4", 00:12:28.537 "bdev_name": "Malloc2p7" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd5", 00:12:28.537 "bdev_name": "TestPT" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd6", 00:12:28.537 "bdev_name": "raid0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd7", 00:12:28.537 "bdev_name": "concat0" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd8", 00:12:28.537 "bdev_name": "raid1" 00:12:28.537 }, 00:12:28.537 { 00:12:28.537 "nbd_device": "/dev/nbd9", 00:12:28.537 "bdev_name": "AIO0" 00:12:28.537 } 00:12:28.537 ]' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:28.537 /dev/nbd1 00:12:28.537 /dev/nbd10 00:12:28.537 /dev/nbd11 00:12:28.537 /dev/nbd12 00:12:28.537 /dev/nbd13 00:12:28.537 /dev/nbd14 00:12:28.537 /dev/nbd15 00:12:28.537 /dev/nbd2 00:12:28.537 /dev/nbd3 00:12:28.537 /dev/nbd4 00:12:28.537 /dev/nbd5 00:12:28.537 /dev/nbd6 00:12:28.537 /dev/nbd7 00:12:28.537 /dev/nbd8 00:12:28.537 /dev/nbd9' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:28.537 /dev/nbd1 00:12:28.537 /dev/nbd10 00:12:28.537 /dev/nbd11 00:12:28.537 /dev/nbd12 00:12:28.537 /dev/nbd13 00:12:28.537 /dev/nbd14 00:12:28.537 /dev/nbd15 00:12:28.537 /dev/nbd2 00:12:28.537 /dev/nbd3 00:12:28.537 /dev/nbd4 00:12:28.537 /dev/nbd5 00:12:28.537 /dev/nbd6 00:12:28.537 /dev/nbd7 00:12:28.537 /dev/nbd8 00:12:28.537 /dev/nbd9' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:28.537 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:28.538 256+0 records in 00:12:28.538 256+0 records out 00:12:28.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692577 s, 151 MB/s 00:12:28.538 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:28.538 22:58:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:28.796 256+0 records in 00:12:28.796 256+0 records out 00:12:28.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14766 s, 7.1 MB/s 00:12:28.796 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:28.796 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:29.055 256+0 records in 00:12:29.055 256+0 records out 00:12:29.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151427 s, 6.9 MB/s 00:12:29.055 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.055 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:29.055 256+0 records in 00:12:29.055 256+0 records out 00:12:29.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142405 s, 7.4 MB/s 00:12:29.055 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.055 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:29.314 256+0 records in 00:12:29.314 256+0 records out 00:12:29.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149063 s, 7.0 MB/s 00:12:29.314 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.314 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:29.314 256+0 records in 00:12:29.314 256+0 records out 00:12:29.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145492 s, 7.2 MB/s 00:12:29.314 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.314 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:29.572 256+0 records in 00:12:29.572 256+0 records out 00:12:29.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142047 s, 7.4 MB/s 00:12:29.572 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.572 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 
count=256 oflag=direct 00:12:29.572 256+0 records in 00:12:29.572 256+0 records out 00:12:29.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147584 s, 7.1 MB/s 00:12:29.572 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.572 22:58:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:29.831 256+0 records in 00:12:29.831 256+0 records out 00:12:29.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151704 s, 6.9 MB/s 00:12:29.831 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.831 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:30.089 256+0 records in 00:12:30.089 256+0 records out 00:12:30.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147752 s, 7.1 MB/s 00:12:30.089 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.089 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:30.089 256+0 records in 00:12:30.089 256+0 records out 00:12:30.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144636 s, 7.2 MB/s 00:12:30.089 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.089 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:30.348 256+0 records in 00:12:30.348 256+0 records out 00:12:30.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150376 s, 7.0 MB/s 00:12:30.348 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.348 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:30.349 256+0 records in 00:12:30.349 256+0 records out 00:12:30.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13622 s, 7.7 MB/s 00:12:30.349 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.349 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:30.608 256+0 records in 00:12:30.608 256+0 records out 00:12:30.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141487 s, 7.4 MB/s 00:12:30.608 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.608 22:58:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:30.608 256+0 records in 00:12:30.608 256+0 records out 00:12:30.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151304 s, 6.9 MB/s 00:12:30.608 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.608 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:30.867 256+0 records in 00:12:30.867 256+0 records out 00:12:30.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160503 s, 6.5 MB/s 00:12:30.867 22:58:20 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.867 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:31.126 256+0 records in 00:12:31.126 256+0 records out 00:12:31.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206444 s, 5.1 MB/s 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:31.126 22:58:20 
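The write half of nbd_dd_data_verify, traced above, reduces to one random 1 MiB buffer fanned out to every device with O_DIRECT. A condensed sketch, with the device list abbreviated:

nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10)   # abbreviated; the run above covers all 16 devices
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
# 256 x 4 KiB = 1 MiB of random data, generated once...
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
# ...then copied to each NBD device, bypassing the page cache.
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done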
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:31.126 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.127 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- 
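The verify half then reads each device back and compares the first 1 MiB byte-for-byte against the same temp file before deleting it; in condensed form:

for i in "${nbd_list[@]}"; do
    # -b reports differing bytes; -n 1M stops after the range that was written.
    cmp -b -n 1M "$tmp_file" "$i"
done
rm "$tmp_file"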
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.693 22:58:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:31.693 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.952 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:32.519 
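Teardown mirrors startup: an nbd_stop_disk RPC per device, then waitfornbd_exit polls /proc/partitions until the name disappears. A sketch under the same caveats (the sleep and the polarity of the break are inferred from the function's purpose, not visible in the xtrace):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # still exported; keep polling (delay assumed)
        else
            break       # gone from /proc/partitions, device released
        fi
    done
    return 0
}

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
waitfornbd_exit nbd0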
22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.519 22:58:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:32.778 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:32.779 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.779 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.779 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.037 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd15 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.605 22:58:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:33.863 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.864 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.429 22:58:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:34.688 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:34.688 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:34.688 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:34.688 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.688 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.688 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:34.946 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.946 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.946 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.946 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.205 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.463 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.721 22:58:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:35.721 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:35.721 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:35.721 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 
/dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:35.979 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:36.237 malloc_lvol_verify 00:12:36.237 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:36.495 c95b4c25-4799-4e9d-860e-6d74462c5f8c 00:12:36.495 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:36.839 96951600-778f-4788-990d-5bc9cea01547 00:12:36.839 22:58:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:36.839 /dev/nbd0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:37.123 mke2fs 1.46.5 (30-Dec-2021) 00:12:37.123 00:12:37.123 Filesystem too small for a journal 00:12:37.123 Discarding device blocks: 0/1024 done 00:12:37.123 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:37.123 00:12:37.123 Allocating group tables: 0/1 done 00:12:37.123 Writing inode tables: 0/1 done 00:12:37.123 Writing superblocks and filesystem accounting information: 0/1 done 00:12:37.123 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.123 22:58:26 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 127686 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 127686 ']' 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 127686 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127686 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:37.123 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127686' 00:12:37.123 killing process with pid 127686 00:12:37.124 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 127686 00:12:37.124 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 127686 00:12:37.691 22:58:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:12:37.691 00:12:37.691 real 0m24.355s 00:12:37.691 user 0m34.603s 00:12:37.691 sys 0m8.944s 00:12:37.691 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.691 22:58:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:37.691 ************************************ 00:12:37.691 END TEST bdev_nbd 00:12:37.691 ************************************ 00:12:37.691 22:58:26 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:37.691 22:58:26 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:12:37.691 22:58:26 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:12:37.691 22:58:26 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:12:37.691 22:58:26 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:12:37.691 22:58:26 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.691 22:58:26 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.691 22:58:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:37.691 ************************************ 00:12:37.691 START TEST bdev_fio 00:12:37.691 ************************************ 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:37.691 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:37.691 22:58:26 
blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in 
"${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 
00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.691 22:58:26 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:37.691 ************************************ 00:12:37.691 START TEST bdev_fio_rw_verify 00:12:37.691 ************************************ 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:37.692 22:58:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:37.692 22:58:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:37.692 22:58:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:37.692 22:58:27 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:37.692 22:58:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:37.692 22:58:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.950 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.950 fio-3.35 00:12:37.950 Starting 16 threads 00:12:50.148 00:12:50.148 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=128835: Sat Jul 13 22:58:38 2024 00:12:50.148 read: IOPS=76.6k, BW=299MiB/s (314MB/s)(2992MiB/10001msec) 00:12:50.148 slat (usec): min=2, max=38858, avg=35.27, stdev=424.70 00:12:50.148 clat (usec): min=10, max=40551, avg=294.22, stdev=1290.96 00:12:50.148 lat (usec): min=28, max=40570, avg=329.49, stdev=1358.69 00:12:50.148 clat percentiles (usec): 00:12:50.148 | 50.000th=[ 174], 99.000th=[ 799], 99.900th=[16319], 99.990th=[24249], 00:12:50.148 | 99.999th=[39060] 00:12:50.148 write: IOPS=122k, BW=476MiB/s (499MB/s)(4707MiB/9881msec); 0 zone resets 00:12:50.148 slat (usec): min=4, max=52061, avg=66.51, stdev=682.68 00:12:50.148 clat (usec): 
min=9, max=50337, avg=386.11, stdev=1601.15 00:12:50.148 lat (usec): min=25, max=52389, avg=452.62, stdev=1740.14 00:12:50.148 clat percentiles (usec): 00:12:50.148 | 50.000th=[ 217], 99.000th=[ 6456], 99.900th=[20317], 99.990th=[35914], 00:12:50.148 | 99.999th=[49546] 00:12:50.148 bw ( KiB/s): min=301120, max=741720, per=98.77%, avg=481748.82, stdev=8404.06, samples=305 00:12:50.148 iops : min=75280, max=185430, avg=120437.18, stdev=2101.02, samples=305 00:12:50.148 lat (usec) : 10=0.01%, 20=0.01%, 50=0.50%, 100=11.99%, 250=57.21% 00:12:50.148 lat (usec) : 500=26.90%, 750=2.01%, 1000=0.16% 00:12:50.148 lat (msec) : 2=0.16%, 4=0.08%, 10=0.20%, 20=0.69%, 50=0.09% 00:12:50.148 lat (msec) : 100=0.01% 00:12:50.148 cpu : usr=55.80%, sys=2.06%, ctx=223162, majf=2, minf=93774 00:12:50.148 IO depths : 1=11.3%, 2=23.7%, 4=51.9%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.148 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.148 issued rwts: total=765896,1204904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.148 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:50.148 00:12:50.148 Run status group 0 (all jobs): 00:12:50.148 READ: bw=299MiB/s (314MB/s), 299MiB/s-299MiB/s (314MB/s-314MB/s), io=2992MiB (3137MB), run=10001-10001msec 00:12:50.148 WRITE: bw=476MiB/s (499MB/s), 476MiB/s-476MiB/s (499MB/s-499MB/s), io=4707MiB (4935MB), run=9881-9881msec 00:12:50.148 ----------------------------------------------------- 00:12:50.148 Suppressions used: 00:12:50.148 count bytes template 00:12:50.148 16 140 /usr/src/fio/parse.c 00:12:50.148 10639 1021344 /usr/src/fio/iolog.c 00:12:50.148 1 904 libcrypto.so 00:12:50.148 ----------------------------------------------------- 00:12:50.148 00:12:50.148 00:12:50.148 real 0m11.940s 00:12:50.148 user 1m32.203s 00:12:50.148 sys 0m4.260s 00:12:50.148 22:58:38 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.148 22:58:38 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:50.148 ************************************ 00:12:50.148 END TEST bdev_fio_rw_verify 00:12:50.148 ************************************ 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:50.148 22:58:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:50.149 22:58:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d6a634c7-c081-4850-9062-6ec6cb87ecab"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6a634c7-c081-4850-9062-6ec6cb87ecab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "106f8ece-21be-5d43-84bf-d85f7b430808"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "106f8ece-21be-5d43-84bf-d85f7b430808",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b45e48b2-4d78-5d5e-a509-0822a2a54cc3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b45e48b2-4d78-5d5e-a509-0822a2a54cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3427f0b5-0aea-5b89-96f0-78d2faaf1061"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3427f0b5-0aea-5b89-96f0-78d2faaf1061",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "249e37f0-f744-5245-b184-468a6ecba645"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "249e37f0-f744-5245-b184-468a6ecba645",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "714e76e0-6b68-5619-a04f-f8dfedef7e74"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "714e76e0-6b68-5619-a04f-f8dfedef7e74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "837fb58e-03b3-52ea-ab3d-fec4e4cf7ed8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "837fb58e-03b3-52ea-ab3d-fec4e4cf7ed8",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c8b87019-c25c-587e-8a14-f50032625978"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8b87019-c25c-587e-8a14-f50032625978",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "b80c12e8-d175-56f4-8d90-bab22072166a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b80c12e8-d175-56f4-8d90-bab22072166a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "056c029a-8af0-54b5-a974-b0391164dcb9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "056c029a-8af0-54b5-a974-b0391164dcb9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dabb5577-00ec-553a-8c19-52ec1d5a4d13"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dabb5577-00ec-553a-8c19-52ec1d5a4d13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5ecf1906-ef9b-5de4-8600-f14de505cd49"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5ecf1906-ef9b-5de4-8600-f14de505cd49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d7ea1349-b927-4bf5-a432-356703ea7574"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d7ea1349-b927-4bf5-a432-356703ea7574",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d7ea1349-b927-4bf5-a432-356703ea7574",' ' "strip_size_kb": 
64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "171b1e06-67fb-49f8-aa8a-011590d26dd6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "78fea3d6-5646-4ef2-9985-7e4904f6ef75",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "47feeb56-7387-4578-a301-b66f60870d3e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "47feeb56-7387-4578-a301-b66f60870d3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "47feeb56-7387-4578-a301-b66f60870d3e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "bfc61314-e16e-40d6-96fa-a0edda1a8e57",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "396de916-4c06-4d7b-b833-e9b1602e6f34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ceda7f71-794e-4e14-a4b8-fec6e8d240a9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ceda7f71-794e-4e14-a4b8-fec6e8d240a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' 
' "raid": {' ' "uuid": "ceda7f71-794e-4e14-a4b8-fec6e8d240a9",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "a3b4bc8e-6fe8-4d48-b564-8db03d96a7dd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "14c8b8e2-0c8d-4155-aad8-d1cdab4a29fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "427b3b3d-fed4-4244-81b0-31621926c982"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "427b3b3d-fed4-4244-81b0-31621926c982",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:50.149 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:12:50.149 Malloc1p0 00:12:50.149 Malloc1p1 00:12:50.149 Malloc2p0 00:12:50.149 Malloc2p1 00:12:50.149 Malloc2p2 00:12:50.149 Malloc2p3 00:12:50.149 Malloc2p4 00:12:50.149 Malloc2p5 00:12:50.149 Malloc2p6 00:12:50.149 Malloc2p7 00:12:50.149 TestPT 00:12:50.149 raid0 00:12:50.149 concat0 ]] 00:12:50.149 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d6a634c7-c081-4850-9062-6ec6cb87ecab"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6a634c7-c081-4850-9062-6ec6cb87ecab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "106f8ece-21be-5d43-84bf-d85f7b430808"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"106f8ece-21be-5d43-84bf-d85f7b430808",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b45e48b2-4d78-5d5e-a509-0822a2a54cc3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b45e48b2-4d78-5d5e-a509-0822a2a54cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3427f0b5-0aea-5b89-96f0-78d2faaf1061"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3427f0b5-0aea-5b89-96f0-78d2faaf1061",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "249e37f0-f744-5245-b184-468a6ecba645"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "249e37f0-f744-5245-b184-468a6ecba645",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "714e76e0-6b68-5619-a04f-f8dfedef7e74"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "714e76e0-6b68-5619-a04f-f8dfedef7e74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "837fb58e-03b3-52ea-ab3d-fec4e4cf7ed8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "837fb58e-03b3-52ea-ab3d-fec4e4cf7ed8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "c8b87019-c25c-587e-8a14-f50032625978"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8b87019-c25c-587e-8a14-f50032625978",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "b80c12e8-d175-56f4-8d90-bab22072166a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b80c12e8-d175-56f4-8d90-bab22072166a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "056c029a-8af0-54b5-a974-b0391164dcb9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "056c029a-8af0-54b5-a974-b0391164dcb9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dabb5577-00ec-553a-8c19-52ec1d5a4d13"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dabb5577-00ec-553a-8c19-52ec1d5a4d13",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5ecf1906-ef9b-5de4-8600-f14de505cd49"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5ecf1906-ef9b-5de4-8600-f14de505cd49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' 
' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d7ea1349-b927-4bf5-a432-356703ea7574"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d7ea1349-b927-4bf5-a432-356703ea7574",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d7ea1349-b927-4bf5-a432-356703ea7574",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "171b1e06-67fb-49f8-aa8a-011590d26dd6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "78fea3d6-5646-4ef2-9985-7e4904f6ef75",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "47feeb56-7387-4578-a301-b66f60870d3e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "47feeb56-7387-4578-a301-b66f60870d3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "47feeb56-7387-4578-a301-b66f60870d3e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "bfc61314-e16e-40d6-96fa-a0edda1a8e57",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"396de916-4c06-4d7b-b833-e9b1602e6f34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ceda7f71-794e-4e14-a4b8-fec6e8d240a9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ceda7f71-794e-4e14-a4b8-fec6e8d240a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ceda7f71-794e-4e14-a4b8-fec6e8d240a9",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "a3b4bc8e-6fe8-4d48-b564-8db03d96a7dd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "14c8b8e2-0c8d-4155-aad8-d1cdab4a29fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "427b3b3d-fed4-4244-81b0-31621926c982"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "427b3b3d-fed4-4244-81b0-31621926c982",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | 
.name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:12:50.151 22:58:39 
blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.151 22:58:39 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:50.151 ************************************ 00:12:50.151 START TEST bdev_fio_trim 00:12:50.151 ************************************ 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- 
# local asan_lib= 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:50.151 22:58:39 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:50.151 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:50.151 fio-3.35 00:12:50.151 Starting 14 threads 00:13:02.342 00:13:02.342 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=129031: Sat Jul 13 22:58:50 2024 00:13:02.342 write: 
IOPS=126k, BW=493MiB/s (517MB/s)(4928MiB/10005msec); 0 zone resets 00:13:02.342 slat (usec): min=2, max=24179, avg=39.09, stdev=398.25 00:13:02.342 clat (usec): min=26, max=28347, avg=287.42, stdev=1117.28 00:13:02.342 lat (usec): min=38, max=28370, avg=326.51, stdev=1185.22 00:13:02.342 clat percentiles (usec): 00:13:02.342 | 50.000th=[ 192], 99.000th=[ 494], 99.900th=[16319], 99.990th=[20317], 00:13:02.342 | 99.999th=[28181] 00:13:02.342 bw ( KiB/s): min=346528, max=724952, per=99.91%, avg=503965.51, stdev=9288.50, samples=267 00:13:02.342 iops : min=86632, max=181236, avg=125991.28, stdev=2322.12, samples=267 00:13:02.342 trim: IOPS=126k, BW=493MiB/s (517MB/s)(4928MiB/10005msec); 0 zone resets 00:13:02.342 slat (usec): min=4, max=28031, avg=26.35, stdev=340.57 00:13:02.342 clat (usec): min=4, max=28370, avg=305.09, stdev=1137.09 00:13:02.342 lat (usec): min=13, max=28391, avg=331.44, stdev=1186.59 00:13:02.342 clat percentiles (usec): 00:13:02.342 | 50.000th=[ 215], 99.000th=[ 433], 99.900th=[16319], 99.990th=[20317], 00:13:02.342 | 99.999th=[28181] 00:13:02.342 bw ( KiB/s): min=346528, max=725008, per=99.91%, avg=503969.29, stdev=9289.64, samples=267 00:13:02.342 iops : min=86632, max=181250, avg=125992.13, stdev=2322.39, samples=267 00:13:02.342 lat (usec) : 10=0.11%, 20=0.32%, 50=1.31%, 100=6.07%, 250=61.18% 00:13:02.342 lat (usec) : 500=30.11%, 750=0.22%, 1000=0.02% 00:13:02.342 lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%, 20=0.57%, 50=0.02% 00:13:02.342 cpu : usr=68.90%, sys=0.49%, ctx=168987, majf=0, minf=8994 00:13:02.342 IO depths : 1=12.3%, 2=24.5%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:02.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.342 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.342 issued rwts: total=0,1261651,1261655,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.342 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:02.342 00:13:02.342 Run status group 0 (all jobs): 00:13:02.342 WRITE: bw=493MiB/s (517MB/s), 493MiB/s-493MiB/s (517MB/s-517MB/s), io=4928MiB (5168MB), run=10005-10005msec 00:13:02.342 TRIM: bw=493MiB/s (517MB/s), 493MiB/s-493MiB/s (517MB/s-517MB/s), io=4928MiB (5168MB), run=10005-10005msec 00:13:02.342 ----------------------------------------------------- 00:13:02.342 Suppressions used: 00:13:02.342 count bytes template 00:13:02.342 14 129 /usr/src/fio/parse.c 00:13:02.342 1 904 libcrypto.so 00:13:02.342 ----------------------------------------------------- 00:13:02.342 00:13:02.342 00:13:02.342 real 0m11.539s 00:13:02.342 user 1m39.036s 00:13:02.342 sys 0m1.479s 00:13:02.342 22:58:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.342 22:58:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:13:02.342 ************************************ 00:13:02.342 END TEST bdev_fio_trim 00:13:02.342 ************************************ 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:02.342 /home/vagrant/spdk_repo/spdk 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:13:02.342 
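The job sections fed to fio above are generated straight from the bdev JSON dump: only bdevs that report unmap support become trim jobs, which is why raid1 and AIO0 (both listed with "unmap": false earlier) are absent from the 14 started threads. A minimal sketch of that generator, assuming $fio_config points at the shared bdev.fio and ${bdevs[@]} holds the JSON objects dumped above:

    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"      # one fio job section per unmap-capable bdev
        echo "filename=$b"   # the spdk_bdev ioengine resolves this by bdev name
    done >> "$fio_config"

The harness also resolves libasan from ldd on the fio plugin and places it first in LD_PRELOAD, as traced at 22:58:39 above, so the sanitizer runtime is loaded before the plugin itself.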
00:13:02.342 real 0m23.777s 00:13:02.342 user 3m11.413s 00:13:02.342 sys 0m5.860s 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.342 22:58:50 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:02.342 ************************************ 00:13:02.342 END TEST bdev_fio 00:13:02.342 ************************************ 00:13:02.342 22:58:50 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:02.342 22:58:50 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:02.342 22:58:50 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:02.342 22:58:50 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:02.342 22:58:50 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.342 22:58:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:02.342 ************************************ 00:13:02.342 START TEST bdev_verify 00:13:02.342 ************************************ 00:13:02.342 22:58:50 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:02.342 [2024-07-13 22:58:50.786419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:02.342 [2024-07-13 22:58:50.786611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129199 ] 00:13:02.342 [2024-07-13 22:58:50.927610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:02.342 [2024-07-13 22:58:50.981466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.342 [2024-07-13 22:58:50.981465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.342 [2024-07-13 22:58:51.121530] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:02.342 [2024-07-13 22:58:51.121944] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:02.342 [2024-07-13 22:58:51.129481] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:02.342 [2024-07-13 22:58:51.129681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:02.342 [2024-07-13 22:58:51.137541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:02.342 [2024-07-13 22:58:51.137768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:02.342 [2024-07-13 22:58:51.137944] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:02.342 [2024-07-13 22:58:51.245014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:02.342 [2024-07-13 22:58:51.245337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.342 [2024-07-13 22:58:51.245513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:02.342 [2024-07-13 22:58:51.245696] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.342 [2024-07-13 22:58:51.248517] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.342 [2024-07-13 22:58:51.248702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:02.342 Running I/O for 5 seconds... 00:13:07.629 00:13:07.629 Latency(us) 00:13:07.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.630 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x1000 00:13:07.630 Malloc0 : 5.13 1421.08 5.55 0.00 0.00 89946.38 547.37 324105.31 00:13:07.630 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x1000 length 0x1000 00:13:07.630 Malloc0 : 5.06 1390.77 5.43 0.00 0.00 91906.65 513.86 362235.35 00:13:07.630 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x800 00:13:07.630 Malloc1p0 : 5.18 741.73 2.90 0.00 0.00 171948.13 2919.33 179211.17 00:13:07.630 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x800 length 0x800 00:13:07.630 Malloc1p0 : 5.06 733.07 2.86 0.00 0.00 173958.15 2889.54 180164.42 00:13:07.630 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x800 00:13:07.630 Malloc1p1 : 5.18 741.45 2.90 0.00 0.00 171660.81 2725.70 178257.92 00:13:07.630 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x800 length 0x800 00:13:07.630 Malloc1p1 : 5.07 732.83 2.86 0.00 0.00 173655.34 2725.70 178257.92 00:13:07.630 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p0 : 5.18 741.18 2.90 0.00 0.00 171376.29 2695.91 174444.92 00:13:07.630 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p0 : 5.20 739.09 2.89 0.00 0.00 171858.33 2681.02 174444.92 00:13:07.630 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p1 : 5.18 740.91 2.89 0.00 0.00 171082.20 2770.39 171585.16 00:13:07.630 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p1 : 5.20 738.77 2.89 0.00 0.00 171572.10 2770.39 172538.41 00:13:07.630 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p2 : 5.18 740.64 2.89 0.00 0.00 170798.78 2681.02 170631.91 00:13:07.630 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p2 : 5.20 738.45 2.88 0.00 0.00 171299.93 2681.02 170631.91 00:13:07.630 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p3 : 5.19 740.37 2.89 0.00 0.00 170505.74 2740.60 167772.16 00:13:07.630 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p3 : 5.20 738.06 2.88 0.00 0.00 171036.00 2725.70 167772.16 00:13:07.630 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p4 : 5.19 740.08 2.89 0.00 0.00 170206.10 2710.81 164912.41 00:13:07.630 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p4 : 5.21 737.69 2.88 0.00 0.00 170761.81 2710.81 164912.41 00:13:07.630 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p5 : 5.19 739.79 2.89 0.00 0.00 169910.11 2681.02 162052.65 00:13:07.630 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p5 : 5.21 737.26 2.88 0.00 0.00 170492.70 2695.91 163005.91 00:13:07.630 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p6 : 5.19 739.50 2.89 0.00 0.00 169621.89 2710.81 159192.90 00:13:07.630 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p6 : 5.21 736.80 2.88 0.00 0.00 170260.28 2710.81 159192.90 00:13:07.630 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x200 00:13:07.630 Malloc2p7 : 5.19 739.18 2.89 0.00 0.00 169352.01 2681.02 155379.90 00:13:07.630 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x200 length 0x200 00:13:07.630 Malloc2p7 : 5.21 736.54 2.88 0.00 0.00 169978.96 2681.02 155379.90 00:13:07.630 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x1000 00:13:07.630 TestPT : 5.21 737.24 2.88 0.00 0.00 169408.28 7983.48 155379.90 00:13:07.630 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x1000 length 0x1000 00:13:07.630 TestPT : 5.22 711.78 2.78 0.00 0.00 175345.99 8460.10 227826.97 00:13:07.630 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x2000 00:13:07.630 raid0 : 5.20 738.54 2.88 0.00 0.00 168770.04 2800.17 143940.89 00:13:07.630 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x2000 length 0x2000 00:13:07.630 raid0 : 5.22 736.08 2.88 0.00 0.00 169356.63 2859.75 140127.88 00:13:07.630 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x2000 00:13:07.630 concat0 : 5.20 738.14 2.88 0.00 0.00 168483.94 2919.33 139174.63 00:13:07.630 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x2000 length 0x2000 00:13:07.630 concat0 : 5.22 735.84 2.87 0.00 0.00 169037.35 2919.33 142034.39 00:13:07.630 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x1000 00:13:07.630 raid1 : 5.20 737.76 2.88 0.00 0.00 168182.02 3395.96 
133455.13 00:13:07.630 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x1000 length 0x1000 00:13:07.630 raid1 : 5.22 735.60 2.87 0.00 0.00 168693.85 3395.96 144894.14 00:13:07.630 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x0 length 0x4e2 00:13:07.630 AIO0 : 5.21 736.92 2.88 0.00 0.00 167513.79 2636.33 146800.64 00:13:07.630 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:07.630 Verification LBA range: start 0x4e2 length 0x4e2 00:13:07.630 AIO0 : 5.22 735.33 2.87 0.00 0.00 168006.42 562.27 157286.40 00:13:07.630 =================================================================================================================== 00:13:07.630 Total : 24928.46 97.38 0.00 0.00 161628.22 513.86 362235.35 00:13:07.889 00:13:07.889 real 0m6.414s 00:13:07.889 user 0m10.946s 00:13:07.889 sys 0m0.568s 00:13:07.889 22:58:57 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.889 22:58:57 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:07.889 ************************************ 00:13:07.889 END TEST bdev_verify 00:13:07.889 ************************************ 00:13:07.889 22:58:57 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:07.889 22:58:57 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:07.889 22:58:57 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:07.889 22:58:57 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.889 22:58:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.889 ************************************ 00:13:07.889 START TEST bdev_verify_big_io 00:13:07.889 ************************************ 00:13:07.889 22:58:57 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:07.889 [2024-07-13 22:58:57.257157] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
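The bdev_verify_big_io pass starting here reuses the bdevperf harness from the verify run above, changing only the IO size (-o) from 4096 to 65536 bytes. Restating the invocation with the flags spelled out (-C kept as in the original command): -q bounds the per-job queue depth, -w selects the workload, -t sets the runtime in seconds, and -m 0x3 pins reactors to cores 0 and 1, which is why every bdev appears twice in the result tables, once per core mask:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3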
00:13:07.889 [2024-07-13 22:58:57.257395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129301 ] 00:13:08.148 [2024-07-13 22:58:57.407366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.148 [2024-07-13 22:58:57.463655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.148 [2024-07-13 22:58:57.463661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.407 [2024-07-13 22:58:57.603479] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:08.407 [2024-07-13 22:58:57.603916] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:08.407 [2024-07-13 22:58:57.611401] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:08.407 [2024-07-13 22:58:57.611615] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:08.407 [2024-07-13 22:58:57.619475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:08.407 [2024-07-13 22:58:57.619720] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:08.407 [2024-07-13 22:58:57.619871] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:08.407 [2024-07-13 22:58:57.711905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:08.407 [2024-07-13 22:58:57.712320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.407 [2024-07-13 22:58:57.712495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:08.407 [2024-07-13 22:58:57.712672] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.407 [2024-07-13 22:58:57.715446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.407 [2024-07-13 22:58:57.715688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:08.667 [2024-07-13 22:58:57.897875] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.899161] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.900834] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.902534] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.903786] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.905544] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.906716] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.908395] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.909637] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.911363] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.912573] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.914301] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.915575] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.917481] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.919229] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.920408] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:08.667 [2024-07-13 22:58:57.946352] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:08.667 [2024-07-13 22:58:57.948808] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:08.667 Running I/O for 5 seconds... 00:13:15.240 00:13:15.240 Latency(us) 00:13:15.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.240 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x100 00:13:15.240 Malloc0 : 5.79 198.84 12.43 0.00 0.00 635286.14 852.71 1837867.75 00:13:15.240 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x100 length 0x100 00:13:15.240 Malloc0 : 5.86 196.60 12.29 0.00 0.00 642018.09 793.13 2120030.02 00:13:15.240 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x80 00:13:15.240 Malloc1p0 : 6.05 110.50 6.91 0.00 0.00 1087184.99 2666.12 2181038.08 00:13:15.240 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x80 length 0x80 00:13:15.240 Malloc1p0 : 6.50 44.32 2.77 0.00 0.00 2668339.06 1347.96 4331572.13 00:13:15.240 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x80 00:13:15.240 Malloc1p1 : 6.36 42.80 2.67 0.00 0.00 2695275.70 1370.30 4545100.33 00:13:15.240 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x80 length 0x80 00:13:15.240 Malloc1p1 : 6.50 44.31 2.77 0.00 0.00 2591541.49 1690.53 4179051.99 00:13:15.240 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p0 : 5.98 29.44 1.84 0.00 0.00 980667.50 606.95 1586209.51 00:13:15.240 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p0 : 5.99 32.05 2.00 0.00 0.00 902369.35 778.24 1357429.29 00:13:15.240 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p1 : 5.98 29.43 1.84 0.00 0.00 973299.38 852.71 1555705.48 00:13:15.240 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p1 : 5.99 32.04 2.00 0.00 0.00 894572.83 618.12 1334551.27 00:13:15.240 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p2 : 5.98 29.42 1.84 0.00 0.00 965509.89 606.95 1525201.45 00:13:15.240 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p2 : 5.99 32.03 2.00 0.00 0.00 887050.66 673.98 1311673.25 00:13:15.240 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p3 : 6.05 31.75 1.98 0.00 0.00 897824.05 681.43 1502323.43 00:13:15.240 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p3 : 6.00 32.02 2.00 0.00 0.00 880038.95 804.31 1296421.24 00:13:15.240 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p4 : 6.05 31.74 1.98 0.00 0.00 891574.72 703.77 1479445.41 00:13:15.240 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p4 : 6.00 32.01 2.00 0.00 0.00 873586.26 588.33 1281169.22 00:13:15.240 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p5 : 6.05 31.73 1.98 0.00 0.00 884999.71 595.78 1464193.40 00:13:15.240 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p5 : 6.00 32.01 2.00 0.00 0.00 867196.60 670.25 1258291.20 00:13:15.240 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p6 : 6.05 31.72 1.98 0.00 0.00 879008.59 644.19 1441315.37 00:13:15.240 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p6 : 6.10 34.09 2.13 0.00 0.00 812564.15 655.36 1235413.18 00:13:15.240 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x20 00:13:15.240 Malloc2p7 : 6.05 31.72 1.98 0.00 0.00 873014.16 916.01 1426063.36 00:13:15.240 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x20 length 0x20 00:13:15.240 Malloc2p7 : 6.10 34.08 2.13 0.00 0.00 805864.58 748.45 1212535.16 00:13:15.240 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x100 00:13:15.240 TestPT : 6.42 42.96 2.68 0.00 0.00 2466778.49 94848.47 3965523.78 00:13:15.240 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x100 length 0x100 00:13:15.240 TestPT : 6.51 41.80 2.61 0.00 0.00 2544675.27 69587.32 3751995.58 00:13:15.240 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x200 00:13:15.240 raid0 : 6.46 47.09 2.94 0.00 0.00 2198742.44 1482.01 4118043.93 00:13:15.240 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x200 length 0x200 00:13:15.240 raid0 : 6.50 49.21 3.08 0.00 0.00 2106216.11 1638.40 3706239.53 00:13:15.240 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x200 00:13:15.240 concat0 : 6.46 54.52 3.41 0.00 0.00 1877551.19 1370.30 3965523.78 00:13:15.240 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x200 length 0x200 00:13:15.240 concat0 : 6.50 54.12 3.38 0.00 0.00 1890470.71 
1392.64 3568971.40 00:13:15.240 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x100 00:13:15.240 raid1 : 6.43 62.20 3.89 0.00 0.00 1618943.07 2085.24 3828255.65 00:13:15.240 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x100 length 0x100 00:13:15.240 raid1 : 6.48 72.89 4.56 0.00 0.00 1374217.56 2085.24 3431703.27 00:13:15.240 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x0 length 0x4e 00:13:15.240 AIO0 : 6.46 67.95 4.25 0.00 0.00 885558.98 562.27 2287802.18 00:13:15.240 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:15.240 Verification LBA range: start 0x4e length 0x4e 00:13:15.240 AIO0 : 6.51 72.83 4.55 0.00 0.00 821928.44 2010.76 1967509.88 00:13:15.240 =================================================================================================================== 00:13:15.240 Total : 1710.22 106.89 0.00 0.00 1260397.50 562.27 4545100.33 00:13:15.807 00:13:15.808 real 0m7.822s 00:13:15.808 user 0m14.363s 00:13:15.808 sys 0m0.489s 00:13:15.808 22:59:05 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.808 ************************************ 00:13:15.808 END TEST bdev_verify_big_io 00:13:15.808 ************************************ 00:13:15.808 22:59:05 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.808 22:59:05 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:15.808 22:59:05 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:15.808 22:59:05 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:15.808 22:59:05 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.808 22:59:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:15.808 ************************************ 00:13:15.808 START TEST bdev_write_zeroes 00:13:15.808 ************************************ 00:13:15.808 22:59:05 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:15.808 [2024-07-13 22:59:05.145088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
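The aggregate rows of the two tables above can be sanity-checked by hand, since throughput is just IOPS times IO size: 24928.46 IOPS at 4 KiB gives about 97.38 MiB/s for the verify run, and 1710.22 IOPS at 64 KiB gives about 106.89 MiB/s for the big-IO run, matching the reported MiB/s column in both cases.

    # both divisions reproduce the Total rows above
    echo "24928.46 * 4096 / 1048576" | bc -l    # ~97.38
    echo "1710.22 * 65536 / 1048576" | bc -l    # ~106.89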
00:13:15.808 [2024-07-13 22:59:05.145571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129421 ] 00:13:16.066 [2024-07-13 22:59:05.296434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.066 [2024-07-13 22:59:05.388187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.325 [2024-07-13 22:59:05.542155] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:16.325 [2024-07-13 22:59:05.542521] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:16.325 [2024-07-13 22:59:05.550102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:16.325 [2024-07-13 22:59:05.550318] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:16.325 [2024-07-13 22:59:05.558144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:16.325 [2024-07-13 22:59:05.558367] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:16.325 [2024-07-13 22:59:05.558512] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:16.325 [2024-07-13 22:59:05.657384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:16.325 [2024-07-13 22:59:05.657654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.325 [2024-07-13 22:59:05.657816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:16.325 [2024-07-13 22:59:05.657974] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.325 [2024-07-13 22:59:05.660886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.325 [2024-07-13 22:59:05.661092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:16.583 Running I/O for 1 seconds... 
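Every bdevperf pass in this run, including the write_zeroes pass now running, loads its bdev stack from the same --json config. The actual test/bdev/bdev.json is not reproduced in this log; a minimal illustrative shape, assuming only a single malloc bdev mirroring those dumped earlier, would be:

    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF

The real config presumably also declares the split, passthru, raid, concat and AIO bdevs that register during startup above (e.g. the vbdev_passthru match on Malloc3 that creates TestPT).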
00:13:17.961 00:13:17.961 Latency(us) 00:13:17.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.961 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc0 : 1.04 4784.43 18.69 0.00 0.00 26729.46 770.79 49569.05 00:13:17.961 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc1p0 : 1.04 4777.92 18.66 0.00 0.00 26717.55 1072.41 48615.80 00:13:17.961 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc1p1 : 1.05 4770.95 18.64 0.00 0.00 26692.40 1050.07 47662.55 00:13:17.961 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p0 : 1.05 4763.88 18.61 0.00 0.00 26671.13 1154.33 46709.29 00:13:17.961 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p1 : 1.05 4757.12 18.58 0.00 0.00 26645.59 1050.07 45756.04 00:13:17.961 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p2 : 1.05 4750.11 18.56 0.00 0.00 26619.78 1057.51 44802.79 00:13:17.961 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p3 : 1.05 4743.31 18.53 0.00 0.00 26594.63 1131.99 43849.54 00:13:17.961 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p4 : 1.05 4736.52 18.50 0.00 0.00 26573.60 1057.51 42896.29 00:13:17.961 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p5 : 1.06 4729.50 18.47 0.00 0.00 26542.34 1117.09 41704.73 00:13:17.961 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p6 : 1.06 4722.31 18.45 0.00 0.00 26512.16 1154.33 40513.16 00:13:17.961 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 Malloc2p7 : 1.06 4715.44 18.42 0.00 0.00 26492.90 1176.67 39559.91 00:13:17.961 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 TestPT : 1.06 4708.93 18.39 0.00 0.00 26462.83 1154.33 38368.35 00:13:17.961 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 raid0 : 1.06 4700.89 18.36 0.00 0.00 26422.07 1980.97 36461.85 00:13:17.961 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 concat0 : 1.06 4693.47 18.33 0.00 0.00 26348.56 2085.24 34555.35 00:13:17.961 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 raid1 : 1.07 4775.05 18.65 0.00 0.00 25768.45 3157.64 31457.28 00:13:17.961 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:17.961 AIO0 : 1.07 4762.83 18.60 0.00 0.00 25682.21 1638.40 30265.72 00:13:17.961 =================================================================================================================== 00:13:17.961 Total : 75892.66 296.46 0.00 0.00 26464.93 770.79 49569.05 00:13:18.219 ************************************ 00:13:18.220 END TEST bdev_write_zeroes 00:13:18.220 ************************************ 00:13:18.220 00:13:18.220 real 0m2.331s 00:13:18.220 user 0m1.737s 00:13:18.220 sys 0m0.404s 00:13:18.220 22:59:07 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.220 22:59:07 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:18.220 22:59:07 
blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:18.220 22:59:07 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:18.220 22:59:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:18.220 22:59:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.220 22:59:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:18.220 ************************************ 00:13:18.220 START TEST bdev_json_nonenclosed 00:13:18.220 ************************************ 00:13:18.220 22:59:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:18.220 [2024-07-13 22:59:07.535362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:18.220 [2024-07-13 22:59:07.535837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129471 ] 00:13:18.478 [2024-07-13 22:59:07.686410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.478 [2024-07-13 22:59:07.766172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.478 [2024-07-13 22:59:07.766571] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:18.478 [2024-07-13 22:59:07.766733] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:18.478 [2024-07-13 22:59:07.766811] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:18.737 00:13:18.737 real 0m0.410s 00:13:18.737 user 0m0.207s 00:13:18.737 sys 0m0.103s 00:13:18.737 22:59:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:13:18.737 22:59:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.737 22:59:07 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:18.737 ************************************ 00:13:18.737 END TEST bdev_json_nonenclosed 00:13:18.737 ************************************ 00:13:18.737 22:59:07 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:13:18.737 22:59:07 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:13:18.737 22:59:07 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:18.737 22:59:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:18.737 22:59:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.737 22:59:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:18.737 ************************************ 00:13:18.737 START TEST bdev_json_nonarray 00:13:18.737 ************************************ 00:13:18.737 22:59:07 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:18.737 [2024-07-13 22:59:08.004500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:18.737 [2024-07-13 22:59:08.004764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129509 ] 00:13:18.996 [2024-07-13 22:59:08.153487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.996 [2024-07-13 22:59:08.212093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.996 [2024-07-13 22:59:08.212244] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:18.996 [2024-07-13 22:59:08.212286] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:18.996 [2024-07-13 22:59:08.212324] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:18.996 ************************************ 00:13:18.996 END TEST bdev_json_nonarray 00:13:18.996 ************************************ 00:13:18.996 00:13:18.996 real 0m0.366s 00:13:18.996 user 0m0.143s 00:13:18.996 sys 0m0.123s 00:13:18.996 22:59:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:13:18.996 22:59:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.996 22:59:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:18.996 22:59:08 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:13:18.996 22:59:08 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:13:18.996 22:59:08 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:13:18.996 22:59:08 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:13:18.996 22:59:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.996 22:59:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.996 22:59:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:18.996 ************************************ 00:13:18.996 START TEST bdev_qos 00:13:18.996 ************************************ 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=129531 00:13:18.996 Process qos testing pid: 129531 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 129531' 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 129531 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 129531 ']' 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.996 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.996 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.255 [2024-07-13 22:59:08.414289] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:19.255 [2024-07-13 22:59:08.414505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129531 ] 00:13:19.255 [2024-07-13 22:59:08.556567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.255 [2024-07-13 22:59:08.625139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.514 Malloc_0 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.514 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.514 [ 00:13:19.514 { 00:13:19.514 "name": "Malloc_0", 00:13:19.514 "aliases": [ 00:13:19.514 "2fb92efb-7b21-419c-9ff5-de8ad7af72b4" 00:13:19.514 ], 00:13:19.515 "product_name": "Malloc disk", 00:13:19.515 "block_size": 512, 00:13:19.515 "num_blocks": 262144, 00:13:19.515 "uuid": "2fb92efb-7b21-419c-9ff5-de8ad7af72b4", 00:13:19.515 "assigned_rate_limits": { 00:13:19.515 "rw_ios_per_sec": 0, 00:13:19.515 "rw_mbytes_per_sec": 0, 00:13:19.515 "r_mbytes_per_sec": 0, 
00:13:19.515 "w_mbytes_per_sec": 0 00:13:19.515 }, 00:13:19.515 "claimed": false, 00:13:19.515 "zoned": false, 00:13:19.515 "supported_io_types": { 00:13:19.515 "read": true, 00:13:19.515 "write": true, 00:13:19.515 "unmap": true, 00:13:19.515 "flush": true, 00:13:19.515 "reset": true, 00:13:19.515 "nvme_admin": false, 00:13:19.515 "nvme_io": false, 00:13:19.515 "nvme_io_md": false, 00:13:19.515 "write_zeroes": true, 00:13:19.515 "zcopy": true, 00:13:19.515 "get_zone_info": false, 00:13:19.515 "zone_management": false, 00:13:19.515 "zone_append": false, 00:13:19.515 "compare": false, 00:13:19.515 "compare_and_write": false, 00:13:19.515 "abort": true, 00:13:19.515 "seek_hole": false, 00:13:19.515 "seek_data": false, 00:13:19.515 "copy": true, 00:13:19.515 "nvme_iov_md": false 00:13:19.515 }, 00:13:19.515 "memory_domains": [ 00:13:19.515 { 00:13:19.515 "dma_device_id": "system", 00:13:19.515 "dma_device_type": 1 00:13:19.515 }, 00:13:19.515 { 00:13:19.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.515 "dma_device_type": 2 00:13:19.515 } 00:13:19.515 ], 00:13:19.515 "driver_specific": {} 00:13:19.515 } 00:13:19.515 ] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.515 Null_1 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:19.515 [ 00:13:19.515 { 00:13:19.515 "name": "Null_1", 00:13:19.515 "aliases": [ 00:13:19.515 "ecdd76d6-cb9c-4d4a-8874-2220a5fe6de2" 00:13:19.515 ], 00:13:19.515 "product_name": "Null disk", 00:13:19.515 "block_size": 512, 00:13:19.515 "num_blocks": 262144, 00:13:19.515 "uuid": "ecdd76d6-cb9c-4d4a-8874-2220a5fe6de2", 00:13:19.515 "assigned_rate_limits": { 00:13:19.515 "rw_ios_per_sec": 0, 00:13:19.515 "rw_mbytes_per_sec": 0, 00:13:19.515 
"r_mbytes_per_sec": 0, 00:13:19.515 "w_mbytes_per_sec": 0 00:13:19.515 }, 00:13:19.515 "claimed": false, 00:13:19.515 "zoned": false, 00:13:19.515 "supported_io_types": { 00:13:19.515 "read": true, 00:13:19.515 "write": true, 00:13:19.515 "unmap": false, 00:13:19.515 "flush": false, 00:13:19.515 "reset": true, 00:13:19.515 "nvme_admin": false, 00:13:19.515 "nvme_io": false, 00:13:19.515 "nvme_io_md": false, 00:13:19.515 "write_zeroes": true, 00:13:19.515 "zcopy": false, 00:13:19.515 "get_zone_info": false, 00:13:19.515 "zone_management": false, 00:13:19.515 "zone_append": false, 00:13:19.515 "compare": false, 00:13:19.515 "compare_and_write": false, 00:13:19.515 "abort": true, 00:13:19.515 "seek_hole": false, 00:13:19.515 "seek_data": false, 00:13:19.515 "copy": false, 00:13:19.515 "nvme_iov_md": false 00:13:19.515 }, 00:13:19.515 "driver_specific": {} 00:13:19.515 } 00:13:19.515 ] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:13:19.515 22:59:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:13:19.773 Running I/O for 60 seconds... 
00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 67567.22 270268.87 0.00 0.00 272384.00 0.00 0.00 ' 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=67567.22 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 67567 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=67567 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=16000 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 16000 -gt 1000 ']' 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.039 22:59:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:25.039 ************************************ 00:13:25.039 START TEST bdev_qos_iops 00:13:25.039 ************************************ 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 16000 IOPS Malloc_0 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=16000 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:25.039 22:59:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 15998.65 63994.60 0.00 0.00 64896.00 0.00 0.00 ' 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=15998.65 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@385 -- # echo 15998 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=15998 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=14400 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=17600 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 15998 -lt 14400 ']' 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 15998 -gt 17600 ']' 00:13:30.304 00:13:30.304 real 0m5.215s 00:13:30.304 user 0m0.114s 00:13:30.304 sys 0m0.039s 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.304 22:59:19 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:13:30.304 ************************************ 00:13:30.304 END TEST bdev_qos_iops 00:13:30.304 ************************************ 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:13:30.304 22:59:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 33207.64 132830.57 0.00 0.00 135168.00 0.00 0.00 ' 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=135168.00 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 135168 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=135168 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=13 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 13 -lt 2 ']' 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 13 Null_1 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 13 BANDWIDTH Null_1 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
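The bracket tests above ('[' 15998 -lt 14400 ']', '[' 15998 -gt 17600 ']') are run_qos_test's pass criterion: the measured rate must land within 10% either side of the configured limit. A sketch reconstructing that window from the bounds printed in this log (14400/17600 here; the bandwidth test below uses qos_limit=13312, that is 13 * 1024, and gets 11980/14643 the same way):

    qos_limit=16000                        # configured cap, in the unit under test
    lower_limit=$((qos_limit * 9 / 10))    # 14400, matching the log
    upper_limit=$((qos_limit * 11 / 10))   # 17600, matching the log
    # pass if the observed rate sits inside [lower_limit, upper_limit]
    [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ]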
00:13:35.638 22:59:24 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.638 22:59:24 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:35.638 ************************************ 00:13:35.638 START TEST bdev_qos_bw 00:13:35.638 ************************************ 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 13 BANDWIDTH Null_1 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=13 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:13:35.638 22:59:24 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3334.34 13337.34 0.00 0.00 13592.00 0.00 0.00 ' 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=13592.00 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 13592 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=13592 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=13312 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11980 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=14643 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 13592 -lt 11980 ']' 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 13592 -gt 14643 ']' 00:13:40.903 00:13:40.903 real 0m5.247s 00:13:40.903 user 0m0.128s 00:13:40.903 sys 0m0.022s 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.903 ************************************ 00:13:40.903 END TEST bdev_qos_bw 00:13:40.903 ************************************ 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.903 22:59:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:40.903 ************************************ 00:13:40.903 START TEST bdev_qos_ro_bw 00:13:40.903 ************************************ 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:13:40.903 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:40.904 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:40.904 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:40.904 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:13:40.904 22:59:29 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.26 2049.02 0.00 0.00 2068.00 0.00 0.00 ' 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2068 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2068 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:13:46.160 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:13:46.160 
************************************ 00:13:46.161 END TEST bdev_qos_ro_bw 00:13:46.161 ************************************ 00:13:46.161 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:13:46.161 00:13:46.161 real 0m5.172s 00:13:46.161 user 0m0.114s 00:13:46.161 sys 0m0.033s 00:13:46.161 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.161 22:59:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:13:46.161 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:46.161 22:59:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:46.161 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.161 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:46.418 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.418 22:59:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:13:46.418 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.418 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:46.418 00:13:46.418 Latency(us) 00:13:46.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.418 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:46.419 Malloc_0 : 26.69 22878.36 89.37 0.00 0.00 11085.79 2546.97 503316.48 00:13:46.419 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:46.419 Null_1 : 26.79 26338.25 102.88 0.00 0.00 9700.05 837.82 102951.10 00:13:46.419 =================================================================================================================== 00:13:46.419 Total : 49216.61 192.25 0.00 0.00 10342.89 837.82 503316.48 00:13:46.419 0 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 129531 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 129531 ']' 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 129531 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129531 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:46.419 killing process with pid 129531 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129531' 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 129531 00:13:46.419 Received shutdown signal, test time was about 26.828175 seconds 00:13:46.419 00:13:46.419 Latency(us) 00:13:46.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.419 
=================================================================================================================== 00:13:46.419 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:46.419 22:59:35 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 129531 00:13:46.677 22:59:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:13:46.677 00:13:46.677 real 0m27.674s 00:13:46.677 user 0m28.292s 00:13:46.677 sys 0m0.570s 00:13:46.677 ************************************ 00:13:46.677 END TEST bdev_qos 00:13:46.677 ************************************ 00:13:46.677 22:59:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.677 22:59:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:46.935 22:59:36 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:46.935 22:59:36 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:46.935 22:59:36 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:46.935 22:59:36 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.935 22:59:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.935 ************************************ 00:13:46.935 START TEST bdev_qd_sampling 00:13:46.935 ************************************ 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=129987 00:13:46.935 Process bdev QD sampling period testing pid: 129987 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 129987' 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 129987 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 129987 ']' 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.935 22:59:36 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:46.935 [2024-07-13 22:59:36.157833] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
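The QD sampling suite starting here needs only two RPCs: turn on queue-depth polling for the bdev, then read the period back out of the iostat dump. A sketch with the same names and values that appear in the xtrace below, using scripts/rpc.py (SPDK's standalone RPC client, which the harness's rpc_cmd is assumed to wrap):

    # enable queue-depth sampling on Malloc_QD with the period the test uses
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    # verify: the per-bdev iostat should now carry queue_depth_polling_period == 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_QD \
        | jq -r '.bdevs[0].queue_depth_polling_period'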
00:13:46.935 [2024-07-13 22:59:36.158119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129987 ] 00:13:46.935 [2024-07-13 22:59:36.311503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.192 [2024-07-13 22:59:36.399202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.192 [2024-07-13 22:59:36.399209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.757 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.757 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:13:47.757 22:59:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:47.757 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.757 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:48.014 Malloc_QD 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.014 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:48.015 [ 00:13:48.015 { 00:13:48.015 "name": "Malloc_QD", 00:13:48.015 "aliases": [ 00:13:48.015 "294510f3-0c18-4c8f-b1f9-e616407448ea" 00:13:48.015 ], 00:13:48.015 "product_name": "Malloc disk", 00:13:48.015 "block_size": 512, 00:13:48.015 "num_blocks": 262144, 00:13:48.015 "uuid": "294510f3-0c18-4c8f-b1f9-e616407448ea", 00:13:48.015 "assigned_rate_limits": { 00:13:48.015 "rw_ios_per_sec": 0, 00:13:48.015 "rw_mbytes_per_sec": 0, 00:13:48.015 "r_mbytes_per_sec": 0, 00:13:48.015 "w_mbytes_per_sec": 0 00:13:48.015 }, 00:13:48.015 "claimed": false, 00:13:48.015 "zoned": false, 00:13:48.015 "supported_io_types": { 00:13:48.015 "read": true, 00:13:48.015 "write": true, 00:13:48.015 "unmap": true, 00:13:48.015 "flush": true, 00:13:48.015 "reset": true, 00:13:48.015 "nvme_admin": 
false, 00:13:48.015 "nvme_io": false, 00:13:48.015 "nvme_io_md": false, 00:13:48.015 "write_zeroes": true, 00:13:48.015 "zcopy": true, 00:13:48.015 "get_zone_info": false, 00:13:48.015 "zone_management": false, 00:13:48.015 "zone_append": false, 00:13:48.015 "compare": false, 00:13:48.015 "compare_and_write": false, 00:13:48.015 "abort": true, 00:13:48.015 "seek_hole": false, 00:13:48.015 "seek_data": false, 00:13:48.015 "copy": true, 00:13:48.015 "nvme_iov_md": false 00:13:48.015 }, 00:13:48.015 "memory_domains": [ 00:13:48.015 { 00:13:48.015 "dma_device_id": "system", 00:13:48.015 "dma_device_type": 1 00:13:48.015 }, 00:13:48.015 { 00:13:48.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.015 "dma_device_type": 2 00:13:48.015 } 00:13:48.015 ], 00:13:48.015 "driver_specific": {} 00:13:48.015 } 00:13:48.015 ] 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:13:48.015 22:59:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.015 Running I/O for 5 seconds... 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:13:49.913 "tick_rate": 2200000000, 00:13:49.913 "ticks": 1615580193138, 00:13:49.913 "bdevs": [ 00:13:49.913 { 00:13:49.913 "name": "Malloc_QD", 00:13:49.913 "bytes_read": 923832832, 00:13:49.913 "num_read_ops": 225539, 00:13:49.913 "bytes_written": 0, 00:13:49.913 "num_write_ops": 0, 00:13:49.913 "bytes_unmapped": 0, 00:13:49.913 "num_unmap_ops": 0, 00:13:49.913 "bytes_copied": 0, 00:13:49.913 "num_copy_ops": 0, 00:13:49.913 "read_latency_ticks": 2146902480109, 00:13:49.913 "max_read_latency_ticks": 12134353, 00:13:49.913 "min_read_latency_ticks": 442466, 00:13:49.913 "write_latency_ticks": 0, 00:13:49.913 "max_write_latency_ticks": 0, 00:13:49.913 "min_write_latency_ticks": 0, 00:13:49.913 "unmap_latency_ticks": 0, 00:13:49.913 "max_unmap_latency_ticks": 0, 00:13:49.913 
"min_unmap_latency_ticks": 0, 00:13:49.913 "copy_latency_ticks": 0, 00:13:49.913 "max_copy_latency_ticks": 0, 00:13:49.913 "min_copy_latency_ticks": 0, 00:13:49.913 "io_error": {}, 00:13:49.913 "queue_depth_polling_period": 10, 00:13:49.913 "queue_depth": 512, 00:13:49.913 "io_time": 30, 00:13:49.913 "weighted_io_time": 15360 00:13:49.913 } 00:13:49.913 ] 00:13:49.913 }' 00:13:49.913 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:50.171 00:13:50.171 Latency(us) 00:13:50.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.171 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:50.171 Malloc_QD : 1.98 58905.69 230.10 0.00 0.00 4334.67 1087.30 5898.24 00:13:50.171 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:50.171 Malloc_QD : 1.98 59121.57 230.94 0.00 0.00 4319.14 774.52 5540.77 00:13:50.171 =================================================================================================================== 00:13:50.171 Total : 118027.26 461.04 0.00 0.00 4326.89 774.52 5898.24 00:13:50.171 0 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 129987 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 129987 ']' 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 129987 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129987 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129987' 00:13:50.171 killing process with pid 129987 00:13:50.171 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 129987 00:13:50.171 Received shutdown signal, test time was about 2.035042 seconds 00:13:50.171 00:13:50.171 Latency(us) 00:13:50.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.171 =================================================================================================================== 00:13:50.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.171 22:59:39 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 129987 00:13:50.430 22:59:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:13:50.430 00:13:50.430 real 0m3.556s 00:13:50.430 user 0m6.983s 00:13:50.430 sys 0m0.364s 00:13:50.430 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.430 22:59:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:50.430 ************************************ 00:13:50.430 END TEST bdev_qd_sampling 00:13:50.430 ************************************ 00:13:50.430 22:59:39 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:50.430 22:59:39 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:13:50.430 22:59:39 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.430 22:59:39 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.430 22:59:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:50.430 ************************************ 00:13:50.430 START TEST bdev_error 00:13:50.430 ************************************ 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=130069 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 130069' 00:13:50.430 Process error testing pid: 130069 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 130069 00:13:50.430 22:59:39 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 130069 ']' 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.430 22:59:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:50.430 [2024-07-13 22:59:39.767981] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
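The bdev_error suite set up below stacks an error-injection bdev on a malloc base and arms five forced failures. The RPC sequence, as it appears in the xtrace that follows, sketched via scripts/rpc.py:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Dev_1 128 512   # base bdev: 128 MiB, 512 B blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Dev_1               # exposes the error bdev EE_Dev_1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Dev_2 128 512   # control device, no injection
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os

Those five injected failures are what surface as Fail/s 5.54 on the EE_Dev_1 row of the final table: roughly five failures spread over its 0.90 s runtime.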
00:13:50.430 [2024-07-13 22:59:39.768588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130069 ] 00:13:50.688 [2024-07-13 22:59:39.916280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.688 [2024-07-13 22:59:39.973590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.622 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.622 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:51.622 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 Dev_1 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 [ 00:13:51.623 { 00:13:51.623 "name": "Dev_1", 00:13:51.623 "aliases": [ 00:13:51.623 "00a90de2-a717-4093-9220-3b15e29f91d4" 00:13:51.623 ], 00:13:51.623 "product_name": "Malloc disk", 00:13:51.623 "block_size": 512, 00:13:51.623 "num_blocks": 262144, 00:13:51.623 "uuid": "00a90de2-a717-4093-9220-3b15e29f91d4", 00:13:51.623 "assigned_rate_limits": { 00:13:51.623 "rw_ios_per_sec": 0, 00:13:51.623 "rw_mbytes_per_sec": 0, 00:13:51.623 "r_mbytes_per_sec": 0, 00:13:51.623 "w_mbytes_per_sec": 0 00:13:51.623 }, 00:13:51.623 "claimed": false, 00:13:51.623 "zoned": false, 00:13:51.623 "supported_io_types": { 00:13:51.623 "read": true, 00:13:51.623 "write": true, 00:13:51.623 "unmap": true, 00:13:51.623 "flush": true, 00:13:51.623 "reset": true, 00:13:51.623 "nvme_admin": false, 00:13:51.623 "nvme_io": false, 00:13:51.623 "nvme_io_md": false, 00:13:51.623 "write_zeroes": true, 00:13:51.623 "zcopy": true, 00:13:51.623 "get_zone_info": false, 00:13:51.623 "zone_management": false, 00:13:51.623 "zone_append": false, 
00:13:51.623 "compare": false, 00:13:51.623 "compare_and_write": false, 00:13:51.623 "abort": true, 00:13:51.623 "seek_hole": false, 00:13:51.623 "seek_data": false, 00:13:51.623 "copy": true, 00:13:51.623 "nvme_iov_md": false 00:13:51.623 }, 00:13:51.623 "memory_domains": [ 00:13:51.623 { 00:13:51.623 "dma_device_id": "system", 00:13:51.623 "dma_device_type": 1 00:13:51.623 }, 00:13:51.623 { 00:13:51.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.623 "dma_device_type": 2 00:13:51.623 } 00:13:51.623 ], 00:13:51.623 "driver_specific": {} 00:13:51.623 } 00:13:51.623 ] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 true 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 Dev_2 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 [ 00:13:51.623 { 00:13:51.623 "name": "Dev_2", 00:13:51.623 "aliases": [ 00:13:51.623 "7296617b-da77-48f4-8d29-e3c33a0c53a3" 00:13:51.623 ], 00:13:51.623 "product_name": "Malloc disk", 00:13:51.623 "block_size": 512, 00:13:51.623 "num_blocks": 262144, 00:13:51.623 "uuid": "7296617b-da77-48f4-8d29-e3c33a0c53a3", 00:13:51.623 "assigned_rate_limits": { 00:13:51.623 "rw_ios_per_sec": 0, 00:13:51.623 "rw_mbytes_per_sec": 0, 00:13:51.623 "r_mbytes_per_sec": 0, 00:13:51.623 "w_mbytes_per_sec": 0 00:13:51.623 }, 00:13:51.623 "claimed": 
false, 00:13:51.623 "zoned": false, 00:13:51.623 "supported_io_types": { 00:13:51.623 "read": true, 00:13:51.623 "write": true, 00:13:51.623 "unmap": true, 00:13:51.623 "flush": true, 00:13:51.623 "reset": true, 00:13:51.623 "nvme_admin": false, 00:13:51.623 "nvme_io": false, 00:13:51.623 "nvme_io_md": false, 00:13:51.623 "write_zeroes": true, 00:13:51.623 "zcopy": true, 00:13:51.623 "get_zone_info": false, 00:13:51.623 "zone_management": false, 00:13:51.623 "zone_append": false, 00:13:51.623 "compare": false, 00:13:51.623 "compare_and_write": false, 00:13:51.623 "abort": true, 00:13:51.623 "seek_hole": false, 00:13:51.623 "seek_data": false, 00:13:51.623 "copy": true, 00:13:51.623 "nvme_iov_md": false 00:13:51.623 }, 00:13:51.623 "memory_domains": [ 00:13:51.623 { 00:13:51.623 "dma_device_id": "system", 00:13:51.623 "dma_device_type": 1 00:13:51.623 }, 00:13:51.623 { 00:13:51.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.623 "dma_device_type": 2 00:13:51.623 } 00:13:51.623 ], 00:13:51.623 "driver_specific": {} 00:13:51.623 } 00:13:51.623 ] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:51.623 22:59:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:13:51.623 22:59:40 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:51.881 Running I/O for 5 seconds... 00:13:52.817 22:59:41 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 130069 00:13:52.817 22:59:41 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 130069' 00:13:52.817 Process is existed as continue on error is set. 
Pid: 130069 00:13:52.817 22:59:41 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:52.817 22:59:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.817 22:59:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:52.817 22:59:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.817 22:59:41 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:52.817 22:59:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.817 22:59:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:52.817 22:59:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.817 22:59:41 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:13:52.817 Timeout while waiting for response: 00:13:52.817 00:13:52.817 00:13:57.020 00:13:57.020 Latency(us) 00:13:57.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.020 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:57.020 EE_Dev_1 : 0.90 42502.41 166.03 5.54 0.00 373.52 181.53 856.44 00:13:57.020 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:57.020 Dev_2 : 5.00 94264.51 368.22 0.00 0.00 166.91 67.03 23950.43 00:13:57.020 =================================================================================================================== 00:13:57.020 Total : 136766.92 534.25 5.54 0.00 182.47 67.03 23950.43 00:13:57.954 22:59:46 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 130069 00:13:57.954 22:59:46 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 130069 ']' 00:13:57.954 22:59:46 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 130069 00:13:57.955 22:59:46 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:13:57.955 22:59:46 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.955 22:59:46 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130069 00:13:57.955 killing process with pid 130069 00:13:57.955 Received shutdown signal, test time was about 5.000000 seconds 00:13:57.955 00:13:57.955 Latency(us) 00:13:57.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.955 =================================================================================================================== 00:13:57.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130069' 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 130069 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 130069 00:13:57.955 22:59:47 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=130172 00:13:57.955 22:59:47 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:57.955 22:59:47 
blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 130172' 00:13:57.955 Process error testing pid: 130172 00:13:57.955 22:59:47 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 130172 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 130172 ']' 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.955 22:59:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:57.955 [2024-07-13 22:59:47.351610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:57.955 [2024-07-13 22:59:47.352127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130172 ] 00:13:58.213 [2024-07-13 22:59:47.498438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.213 [2024-07-13 22:59:47.574918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:59.149 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 Dev_1 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:59.149 22:59:48 blockdev_general.bdev_error 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 [ 00:13:59.149 { 00:13:59.149 "name": "Dev_1", 00:13:59.149 "aliases": [ 00:13:59.149 "b6ab4f6b-47f0-4861-a9a0-aad292bd6dec" 00:13:59.149 ], 00:13:59.149 "product_name": "Malloc disk", 00:13:59.149 "block_size": 512, 00:13:59.149 "num_blocks": 262144, 00:13:59.149 "uuid": "b6ab4f6b-47f0-4861-a9a0-aad292bd6dec", 00:13:59.149 "assigned_rate_limits": { 00:13:59.149 "rw_ios_per_sec": 0, 00:13:59.149 "rw_mbytes_per_sec": 0, 00:13:59.149 "r_mbytes_per_sec": 0, 00:13:59.149 "w_mbytes_per_sec": 0 00:13:59.149 }, 00:13:59.149 "claimed": false, 00:13:59.149 "zoned": false, 00:13:59.149 "supported_io_types": { 00:13:59.149 "read": true, 00:13:59.149 "write": true, 00:13:59.149 "unmap": true, 00:13:59.149 "flush": true, 00:13:59.149 "reset": true, 00:13:59.149 "nvme_admin": false, 00:13:59.149 "nvme_io": false, 00:13:59.149 "nvme_io_md": false, 00:13:59.149 "write_zeroes": true, 00:13:59.149 "zcopy": true, 00:13:59.149 "get_zone_info": false, 00:13:59.149 "zone_management": false, 00:13:59.149 "zone_append": false, 00:13:59.149 "compare": false, 00:13:59.149 "compare_and_write": false, 00:13:59.149 "abort": true, 00:13:59.149 "seek_hole": false, 00:13:59.149 "seek_data": false, 00:13:59.149 "copy": true, 00:13:59.149 "nvme_iov_md": false 00:13:59.149 }, 00:13:59.149 "memory_domains": [ 00:13:59.149 { 00:13:59.149 "dma_device_id": "system", 00:13:59.149 "dma_device_type": 1 00:13:59.149 }, 00:13:59.149 { 00:13:59.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.149 "dma_device_type": 2 00:13:59.149 } 00:13:59.149 ], 00:13:59.149 "driver_specific": {} 00:13:59.149 } 00:13:59.149 ] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:59.149 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 true 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 Dev_2 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd 
bdev_wait_for_examine 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.149 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.149 [ 00:13:59.149 { 00:13:59.149 "name": "Dev_2", 00:13:59.149 "aliases": [ 00:13:59.149 "275b44ca-9f30-4758-82aa-d5faccc86430" 00:13:59.149 ], 00:13:59.149 "product_name": "Malloc disk", 00:13:59.149 "block_size": 512, 00:13:59.149 "num_blocks": 262144, 00:13:59.149 "uuid": "275b44ca-9f30-4758-82aa-d5faccc86430", 00:13:59.149 "assigned_rate_limits": { 00:13:59.149 "rw_ios_per_sec": 0, 00:13:59.149 "rw_mbytes_per_sec": 0, 00:13:59.149 "r_mbytes_per_sec": 0, 00:13:59.150 "w_mbytes_per_sec": 0 00:13:59.150 }, 00:13:59.150 "claimed": false, 00:13:59.150 "zoned": false, 00:13:59.150 "supported_io_types": { 00:13:59.150 "read": true, 00:13:59.150 "write": true, 00:13:59.150 "unmap": true, 00:13:59.150 "flush": true, 00:13:59.150 "reset": true, 00:13:59.150 "nvme_admin": false, 00:13:59.150 "nvme_io": false, 00:13:59.150 "nvme_io_md": false, 00:13:59.150 "write_zeroes": true, 00:13:59.150 "zcopy": true, 00:13:59.150 "get_zone_info": false, 00:13:59.150 "zone_management": false, 00:13:59.150 "zone_append": false, 00:13:59.150 "compare": false, 00:13:59.150 "compare_and_write": false, 00:13:59.150 "abort": true, 00:13:59.150 "seek_hole": false, 00:13:59.150 "seek_data": false, 00:13:59.150 "copy": true, 00:13:59.150 "nvme_iov_md": false 00:13:59.150 }, 00:13:59.150 "memory_domains": [ 00:13:59.150 { 00:13:59.150 "dma_device_id": "system", 00:13:59.150 "dma_device_type": 1 00:13:59.150 }, 00:13:59.150 { 00:13:59.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.150 "dma_device_type": 2 00:13:59.150 } 00:13:59.150 ], 00:13:59.150 "driver_specific": {} 00:13:59.150 } 00:13:59.150 ] 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:59.150 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.150 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 130172 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:13:59.150 22:59:48 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 130172 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.150 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 130172 00:13:59.409 Running I/O for 5 seconds... 00:13:59.409 task offset: 18312 on job bdev=EE_Dev_1 fails 00:13:59.409 00:13:59.409 Latency(us) 00:13:59.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.409 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:59.409 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:59.409 EE_Dev_1 : 0.00 23379.38 91.33 5313.50 0.00 462.75 194.56 848.99 00:13:59.409 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:59.409 Dev_2 : 0.00 18099.55 70.70 0.00 0.00 569.31 164.77 1012.83 00:13:59.409 =================================================================================================================== 00:13:59.409 Total : 41478.93 162.03 5313.50 0.00 520.54 164.77 1012.83 00:13:59.409 [2024-07-13 22:59:48.581598] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:59.409 request: 00:13:59.409 { 00:13:59.409 "method": "perform_tests", 00:13:59.409 "req_id": 1 00:13:59.409 } 00:13:59.409 Got JSON-RPC error response 00:13:59.409 response: 00:13:59.409 { 00:13:59.409 "code": -32603, 00:13:59.409 "message": "bdevperf failed with error Operation not permitted" 00:13:59.409 } 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:59.668 00:13:59.668 real 0m9.210s 00:13:59.668 user 0m9.571s 00:13:59.668 sys 0m0.725s 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.668 22:59:48 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:59.668 ************************************ 00:13:59.668 END TEST bdev_error 00:13:59.668 ************************************ 00:13:59.668 22:59:48 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:59.668 22:59:48 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:13:59.668 22:59:48 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.668 22:59:48 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.668 22:59:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:59.668 ************************************ 00:13:59.668 START TEST bdev_stat 00:13:59.668 ************************************ 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=130218 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- 
bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 130218' 00:13:59.668 Process Bdev IO statistics testing pid: 130218 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 130218 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 130218 ']' 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.668 22:59:48 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:59.668 [2024-07-13 22:59:49.025776] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:59.668 [2024-07-13 22:59:49.026528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130218 ] 00:13:59.927 [2024-07-13 22:59:49.171309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.927 [2024-07-13 22:59:49.253781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.927 [2024-07-13 22:59:49.253788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:00.860 Malloc_STAT 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:00.860 22:59:50 blockdev_general.bdev_stat 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:00.860 [ 00:14:00.860 { 00:14:00.860 "name": "Malloc_STAT", 00:14:00.860 "aliases": [ 00:14:00.860 "02836b50-8823-4a55-98d1-91d935fd2bd0" 00:14:00.860 ], 00:14:00.860 "product_name": "Malloc disk", 00:14:00.860 "block_size": 512, 00:14:00.860 "num_blocks": 262144, 00:14:00.860 "uuid": "02836b50-8823-4a55-98d1-91d935fd2bd0", 00:14:00.860 "assigned_rate_limits": { 00:14:00.860 "rw_ios_per_sec": 0, 00:14:00.860 "rw_mbytes_per_sec": 0, 00:14:00.860 "r_mbytes_per_sec": 0, 00:14:00.860 "w_mbytes_per_sec": 0 00:14:00.860 }, 00:14:00.860 "claimed": false, 00:14:00.860 "zoned": false, 00:14:00.860 "supported_io_types": { 00:14:00.860 "read": true, 00:14:00.860 "write": true, 00:14:00.860 "unmap": true, 00:14:00.860 "flush": true, 00:14:00.860 "reset": true, 00:14:00.860 "nvme_admin": false, 00:14:00.860 "nvme_io": false, 00:14:00.860 "nvme_io_md": false, 00:14:00.860 "write_zeroes": true, 00:14:00.860 "zcopy": true, 00:14:00.860 "get_zone_info": false, 00:14:00.860 "zone_management": false, 00:14:00.860 "zone_append": false, 00:14:00.860 "compare": false, 00:14:00.860 "compare_and_write": false, 00:14:00.860 "abort": true, 00:14:00.860 "seek_hole": false, 00:14:00.860 "seek_data": false, 00:14:00.860 "copy": true, 00:14:00.860 "nvme_iov_md": false 00:14:00.860 }, 00:14:00.860 "memory_domains": [ 00:14:00.860 { 00:14:00.860 "dma_device_id": "system", 00:14:00.860 "dma_device_type": 1 00:14:00.860 }, 00:14:00.860 { 00:14:00.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.860 "dma_device_type": 2 00:14:00.860 } 00:14:00.860 ], 00:14:00.860 "driver_specific": {} 00:14:00.860 } 00:14:00.860 ] 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:14:00.860 22:59:50 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:00.860 Running I/O for 10 seconds... 
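The stat pass that follows takes an aggregate read-counter snapshot, then a per-channel snapshot, then a second aggregate snapshot, and asserts that the per-channel sum lands between the two aggregates. A minimal sketch of that check, assuming bdevperf is still serving JSON-RPC on the default /var/tmp/spdk.sock (bdev name and jq paths as in the output below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # first aggregate snapshot of completed reads
    io_count1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    # per-channel snapshot (-c); one channel per reactor in the 0x3 core mask
    iostats_c=$($rpc bdev_get_iostat -b Malloc_STAT -c)
    ch1=$(echo "$iostats_c" | jq -r '.channels[0].num_read_ops')
    ch2=$(echo "$iostats_c" | jq -r '.channels[1].num_read_ops')
    io_count_all=$((ch1 + ch2))
    # second aggregate snapshot; I/O is still running, so the counter keeps growing
    io_count2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    # the summed per-channel count must fall between the two aggregate snapshots
    [ "$io_count_all" -lt "$io_count1" ] && exit 1
    [ "$io_count_all" -gt "$io_count2" ] && exit 1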
00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:14:02.761 "tick_rate": 2200000000, 00:14:02.761 "ticks": 1643782305022, 00:14:02.761 "bdevs": [ 00:14:02.761 { 00:14:02.761 "name": "Malloc_STAT", 00:14:02.761 "bytes_read": 660640256, 00:14:02.761 "num_read_ops": 161283, 00:14:02.761 "bytes_written": 0, 00:14:02.761 "num_write_ops": 0, 00:14:02.761 "bytes_unmapped": 0, 00:14:02.761 "num_unmap_ops": 0, 00:14:02.761 "bytes_copied": 0, 00:14:02.761 "num_copy_ops": 0, 00:14:02.761 "read_latency_ticks": 2143294678968, 00:14:02.761 "max_read_latency_ticks": 19932822, 00:14:02.761 "min_read_latency_ticks": 403636, 00:14:02.761 "write_latency_ticks": 0, 00:14:02.761 "max_write_latency_ticks": 0, 00:14:02.761 "min_write_latency_ticks": 0, 00:14:02.761 "unmap_latency_ticks": 0, 00:14:02.761 "max_unmap_latency_ticks": 0, 00:14:02.761 "min_unmap_latency_ticks": 0, 00:14:02.761 "copy_latency_ticks": 0, 00:14:02.761 "max_copy_latency_ticks": 0, 00:14:02.761 "min_copy_latency_ticks": 0, 00:14:02.761 "io_error": {} 00:14:02.761 } 00:14:02.761 ] 00:14:02.761 }' 00:14:02.761 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:14:02.762 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=161283 00:14:02.762 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:02.762 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.762 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:14:03.021 "tick_rate": 2200000000, 00:14:03.021 "ticks": 1643938322450, 00:14:03.021 "name": "Malloc_STAT", 00:14:03.021 "channels": [ 00:14:03.021 { 00:14:03.021 "thread_id": 2, 00:14:03.021 "bytes_read": 332398592, 00:14:03.021 "num_read_ops": 81152, 00:14:03.021 "bytes_written": 0, 00:14:03.021 "num_write_ops": 0, 00:14:03.021 "bytes_unmapped": 0, 00:14:03.021 "num_unmap_ops": 0, 
00:14:03.021 "bytes_copied": 0, 00:14:03.021 "num_copy_ops": 0, 00:14:03.021 "read_latency_ticks": 1109966622570, 00:14:03.021 "max_read_latency_ticks": 19932822, 00:14:03.021 "min_read_latency_ticks": 8257382, 00:14:03.021 "write_latency_ticks": 0, 00:14:03.021 "max_write_latency_ticks": 0, 00:14:03.021 "min_write_latency_ticks": 0, 00:14:03.021 "unmap_latency_ticks": 0, 00:14:03.021 "max_unmap_latency_ticks": 0, 00:14:03.021 "min_unmap_latency_ticks": 0, 00:14:03.021 "copy_latency_ticks": 0, 00:14:03.021 "max_copy_latency_ticks": 0, 00:14:03.021 "min_copy_latency_ticks": 0 00:14:03.021 }, 00:14:03.021 { 00:14:03.021 "thread_id": 3, 00:14:03.021 "bytes_read": 348127232, 00:14:03.021 "num_read_ops": 84992, 00:14:03.021 "bytes_written": 0, 00:14:03.021 "num_write_ops": 0, 00:14:03.021 "bytes_unmapped": 0, 00:14:03.021 "num_unmap_ops": 0, 00:14:03.021 "bytes_copied": 0, 00:14:03.021 "num_copy_ops": 0, 00:14:03.021 "read_latency_ticks": 1113512300176, 00:14:03.021 "max_read_latency_ticks": 19840738, 00:14:03.021 "min_read_latency_ticks": 8253718, 00:14:03.021 "write_latency_ticks": 0, 00:14:03.021 "max_write_latency_ticks": 0, 00:14:03.021 "min_write_latency_ticks": 0, 00:14:03.021 "unmap_latency_ticks": 0, 00:14:03.021 "max_unmap_latency_ticks": 0, 00:14:03.021 "min_unmap_latency_ticks": 0, 00:14:03.021 "copy_latency_ticks": 0, 00:14:03.021 "max_copy_latency_ticks": 0, 00:14:03.021 "min_copy_latency_ticks": 0 00:14:03.021 } 00:14:03.021 ] 00:14:03.021 }' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=81152 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=81152 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=84992 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=166144 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:14:03.021 "tick_rate": 2200000000, 00:14:03.021 "ticks": 1644208923580, 00:14:03.021 "bdevs": [ 00:14:03.021 { 00:14:03.021 "name": "Malloc_STAT", 00:14:03.021 "bytes_read": 714117632, 00:14:03.021 "num_read_ops": 174339, 00:14:03.021 "bytes_written": 0, 00:14:03.021 "num_write_ops": 0, 00:14:03.021 "bytes_unmapped": 0, 00:14:03.021 "num_unmap_ops": 0, 00:14:03.021 "bytes_copied": 0, 00:14:03.021 "num_copy_ops": 0, 00:14:03.021 "read_latency_ticks": 2359499376859, 00:14:03.021 "max_read_latency_ticks": 20861362, 00:14:03.021 "min_read_latency_ticks": 403636, 00:14:03.021 "write_latency_ticks": 0, 00:14:03.021 "max_write_latency_ticks": 0, 00:14:03.021 "min_write_latency_ticks": 0, 00:14:03.021 "unmap_latency_ticks": 0, 00:14:03.021 "max_unmap_latency_ticks": 0, 00:14:03.021 "min_unmap_latency_ticks": 0, 00:14:03.021 "copy_latency_ticks": 0, 00:14:03.021 "max_copy_latency_ticks": 0, 00:14:03.021 
"min_copy_latency_ticks": 0, 00:14:03.021 "io_error": {} 00:14:03.021 } 00:14:03.021 ] 00:14:03.021 }' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=174339 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 166144 -lt 161283 ']' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 166144 -gt 174339 ']' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 00:14:03.021 Latency(us) 00:14:03.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.021 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:03.021 Malloc_STAT : 2.18 40339.91 157.58 0.00 0.00 6328.91 2040.55 9532.51 00:14:03.021 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:03.021 Malloc_STAT : 2.18 42208.97 164.88 0.00 0.00 6049.98 1630.95 9055.88 00:14:03.021 =================================================================================================================== 00:14:03.021 Total : 82548.88 322.46 0.00 0.00 6186.27 1630.95 9532.51 00:14:03.021 0 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 130218 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 130218 ']' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 130218 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130218 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.021 killing process with pid 130218 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130218' 00:14:03.021 Received shutdown signal, test time was about 2.238578 seconds 00:14:03.021 00:14:03.021 Latency(us) 00:14:03.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.021 =================================================================================================================== 00:14:03.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 130218 00:14:03.021 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 130218 00:14:03.588 22:59:52 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:14:03.588 00:14:03.588 real 0m3.727s 00:14:03.588 user 0m7.441s 00:14:03.588 sys 0m0.374s 00:14:03.588 22:59:52 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.588 22:59:52 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:03.588 ************************************ 00:14:03.588 END TEST bdev_stat 00:14:03.588 ************************************ 00:14:03.588 22:59:52 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:14:03.588 22:59:52 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:14:03.588 00:14:03.588 real 1m55.807s 00:14:03.588 user 5m13.535s 00:14:03.588 sys 0m20.345s 00:14:03.588 ************************************ 00:14:03.588 END TEST blockdev_general 00:14:03.588 ************************************ 00:14:03.588 22:59:52 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.588 22:59:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:03.588 22:59:52 -- common/autotest_common.sh@1142 -- # return 0 00:14:03.588 22:59:52 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:03.588 22:59:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:03.588 22:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.588 22:59:52 -- common/autotest_common.sh@10 -- # set +x 00:14:03.588 ************************************ 00:14:03.588 START TEST bdev_raid 00:14:03.588 ************************************ 00:14:03.588 22:59:52 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:03.588 * Looking for test storage... 
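Before the raid suite proceeds, the bdev_error flow that just completed above is worth condensing: it creates a malloc bdev, stacks an error bdev on top of it (exposed as EE_<name>), arms it to fail the next N I/Os, and drives I/O with bdevperf. A sketch of that RPC sequence, with names taken from the log and the default bdevperf RPC socket assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Dev_1 128 512      # 128 MiB, 512-byte blocks
    $rpc bdev_error_create Dev_1                  # exposes EE_Dev_1 on top of Dev_1
    $rpc bdev_malloc_create -b Dev_2 128 512
    # fail the next 5 I/Os of any type submitted through EE_Dev_1
    $rpc bdev_error_inject_error EE_Dev_1 all failure -n 5
    # run the preconfigured bdevperf job, then tear down in reverse order
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
    $rpc bdev_error_delete EE_Dev_1
    $rpc bdev_malloc_delete Dev_1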
00:14:03.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:03.588 22:59:52 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:14:03.588 22:59:52 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:03.589 22:59:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:03.589 22:59:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.589 22:59:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.589 ************************************ 00:14:03.589 START TEST raid_function_test_raid0 00:14:03.589 ************************************ 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=130365 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 130365' 00:14:03.589 Process raid pid: 130365 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 130365 /var/tmp/spdk-raid.sock 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 130365 ']' 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
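Once bdev_svc is listening on /var/tmp/spdk-raid.sock, the harness assembles the array over JSON-RPC: two malloc base bdevs and a raid0 bdev on top, later exposed through NBD. A sketch of that batch; the base-bdev size and the 64 KiB strip size are assumptions for illustration, but they are consistent with the 131072-block, 512-byte raid0 reported below:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b Base_1   # 32 MiB each (assumed), 512-byte blocks
    $rpc bdev_malloc_create 32 512 -b Base_2
    $rpc bdev_raid_create -z 64 -r raid0 -b "Base_1 Base_2" -n raid
    $rpc nbd_start_disk raid /dev/nbd0         # expose the raid bdev as /dev/nbd0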
00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:03.589 22:59:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:03.589 [2024-07-13 22:59:52.972267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:03.589 [2024-07-13 22:59:52.972739] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.848 [2024-07-13 22:59:53.118929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.848 [2024-07-13 22:59:53.198320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.848 [2024-07-13 22:59:53.253146] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:14:04.781 22:59:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:05.040 [2024-07-13 22:59:54.306475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:05.040 [2024-07-13 22:59:54.308721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:05.040 [2024-07-13 22:59:54.308815] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:05.040 [2024-07-13 22:59:54.308830] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:05.040 [2024-07-13 22:59:54.309035] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:05.040 [2024-07-13 22:59:54.309445] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:05.040 [2024-07-13 22:59:54.309469] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:05.040 [2024-07-13 22:59:54.309670] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.040 Base_1 00:14:05.040 Base_2 00:14:05.040 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:05.040 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:05.040 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # 
raid_bdev=raid 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.298 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:05.557 [2024-07-13 22:59:54.770603] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:05.557 /dev/nbd0 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.557 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.558 1+0 records in 00:14:05.558 1+0 records out 00:14:05.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360026 s, 11.4 MB/s 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:05.558 22:59:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:05.816 { 00:14:05.816 "nbd_device": "/dev/nbd0", 00:14:05.816 "bdev_name": "raid" 00:14:05.816 } 00:14:05.816 ]' 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:05.816 { 00:14:05.816 "nbd_device": "/dev/nbd0", 00:14:05.816 "bdev_name": "raid" 00:14:05.816 } 00:14:05.816 ]' 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:05.816 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local 
unmap_off 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:05.817 4096+0 records in 00:14:05.817 4096+0 records out 00:14:05.817 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0271912 s, 77.1 MB/s 00:14:05.817 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:06.075 4096+0 records in 00:14:06.075 4096+0 records out 00:14:06.075 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.243899 s, 8.6 MB/s 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:06.075 128+0 records in 00:14:06.075 128+0 records out 00:14:06.075 65536 bytes (66 kB, 64 KiB) copied, 0.000600983 s, 109 MB/s 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:06.075 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:06.334 2035+0 records in 00:14:06.334 2035+0 records out 00:14:06.334 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00655357 s, 159 MB/s 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:06.334 22:59:55 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:06.334 456+0 records in 00:14:06.334 456+0 records out 00:14:06.334 233472 bytes (233 kB, 228 KiB) copied, 0.00142428 s, 164 MB/s 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.334 [2024-07-13 22:59:55.737976] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.334 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:06.592 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.592 22:59:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:06.850 22:59:56 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 130365 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 130365 ']' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 130365 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130365 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.850 killing process with pid 130365 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130365' 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 130365 00:14:06.850 [2024-07-13 22:59:56.110197] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.850 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 130365 00:14:06.850 [2024-07-13 22:59:56.110344] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.850 [2024-07-13 22:59:56.110410] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.850 [2024-07-13 22:59:56.110424] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:06.850 [2024-07-13 22:59:56.130338] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.108 22:59:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:07.108 00:14:07.108 real 0m3.441s 00:14:07.108 user 0m4.829s 00:14:07.108 sys 0m0.907s 00:14:07.108 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.108 22:59:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:07.108 ************************************ 00:14:07.108 END TEST raid_function_test_raid0 00:14:07.108 ************************************ 00:14:07.108 22:59:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:07.108 22:59:56 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:14:07.108 22:59:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.108 22:59:56 
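killprocess is the generic teardown helper seen here and after every later suite: it verifies the pid still belongs to an SPDK reactor, sends SIGTERM, and reaps the child. A condensed sketch of what the trace shows (the real helper also special-cases processes launched through sudo, omitted here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1            # still alive?
        ps --no-headers -o comm= "$pid"       # expect reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap; bdev_svc is our child
    }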
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.108 22:59:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.108 ************************************ 00:14:07.108 START TEST raid_function_test_concat 00:14:07.108 ************************************ 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=130511 00:14:07.108 Process raid pid: 130511 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 130511' 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 130511 /var/tmp/spdk-raid.sock 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 130511 ']' 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.108 22:59:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:07.108 [2024-07-13 22:59:56.470312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:07.108 [2024-07-13 22:59:56.470568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.365 [2024-07-13 22:59:56.614033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.365 [2024-07-13 22:59:56.690848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.365 [2024-07-13 22:59:56.744207] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:08.299 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:08.557 [2024-07-13 22:59:57.719827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:08.557 [2024-07-13 22:59:57.721909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:08.557 [2024-07-13 22:59:57.722008] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:08.557 [2024-07-13 22:59:57.722024] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:08.557 [2024-07-13 22:59:57.722169] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:08.557 [2024-07-13 22:59:57.722567] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:08.557 [2024-07-13 22:59:57.722592] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:08.557 [2024-07-13 22:59:57.722784] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.557 Base_1 00:14:08.557 Base_2 00:14:08.557 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:08.557 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:08.557 22:59:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:08.815 
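configure_raid_bdev feeds a generated rpcs.txt into rpc.py; the file's contents are cat'ed rather than echoed into the trace, but the claim messages that follow (Base_1 and Base_2 claimed, blockcnt 131072, blocklen 512) imply two 32 MiB base bdevs and a raid on top. A plausible reconstruction, with the malloc sizes and the -r concat argument being inferences rather than quotes from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_create 32 512 -b Base_1    # 32 MiB / 512 B blocks (inferred)
    $rpc -s $sock bdev_malloc_create 32 512 -b Base_2
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
    $rpc -s $sock nbd_start_disk raid /dev/nbd0          # expose it as a kernel block device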
22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.815 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:08.815 [2024-07-13 22:59:58.199924] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:09.074 /dev/nbd0 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.074 1+0 records in 00:14:09.074 1+0 records out 00:14:09.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021507 s, 19.0 MB/s 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.074 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
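waitfornbd, traced above, polls /proc/partitions until the kernel registers the device, then proves it serves I/O with one direct 4 KiB read. A condensed version; the retry interval is an assumption, since the trace only shows the 20-attempt bound:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # interval assumed
        done
        # a single direct read proves the device is actually usable
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }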
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:09.332 { 00:14:09.332 "nbd_device": "/dev/nbd0", 00:14:09.332 "bdev_name": "raid" 00:14:09.332 } 00:14:09.332 ]' 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:09.332 { 00:14:09.332 "nbd_device": "/dev/nbd0", 00:14:09.332 "bdev_name": "raid" 00:14:09.332 } 00:14:09.332 ]' 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:09.332 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:09.332 4096+0 records in 00:14:09.332 4096+0 records out 00:14:09.332 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0283249 s, 74.0 MB/s 00:14:09.332 22:59:58 
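Two shell idioms recur throughout these suites: counting exported nbd devices by piping nbd_get_disks through jq, and reading the logical sector size from lsblk. Assembled from the trace into standalone pipelines:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # number of nbd devices the target currently exports
    count=$($rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    # logical sector size of the exported device (grep drops the LOG-SEC header row)
    blksize=$(lsblk -o LOG-SEC /dev/nbd0 | grep -v LOG-SEC | cut -d ' ' -f 5)

Note that grep -c prints 0 but exits non-zero when nothing matches, which is why a bare true shows up in the teardown traces where count=0 is the expected result.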
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:09.591 4096+0 records in 00:14:09.591 4096+0 records out 00:14:09.591 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.297282 s, 7.1 MB/s 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:09.591 128+0 records in 00:14:09.591 128+0 records out 00:14:09.591 65536 bytes (66 kB, 64 KiB) copied, 0.000779594 s, 84.1 MB/s 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:09.591 2035+0 records in 00:14:09.591 2035+0 records out 00:14:09.591 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00568214 s, 183 MB/s 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:09.591 456+0 records in 00:14:09.591 456+0 records out 00:14:09.591 233472 bytes (233 kB, 228 KiB) copied, 0.00166767 s, 140 MB/s 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.591 22:59:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.850 [2024-07-13 22:59:59.209785] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.850 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:10.108 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:10.108 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:10.108 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:10.366 22:59:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 130511 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 130511 ']' 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 130511 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130511 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.366 killing process with pid 130511 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130511' 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 130511 00:14:10.366 [2024-07-13 22:59:59.567416] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.366 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 130511 00:14:10.366 [2024-07-13 22:59:59.567559] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.366 [2024-07-13 22:59:59.567626] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.366 [2024-07-13 22:59:59.567642] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:10.366 [2024-07-13 22:59:59.592550] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.625 22:59:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:10.625 00:14:10.625 real 0m3.479s 00:14:10.625 user 0m4.862s 00:14:10.625 sys 0m0.805s 00:14:10.625 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.625 22:59:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:10.625 ************************************ 00:14:10.625 END TEST raid_function_test_concat 00:14:10.625 ************************************ 00:14:10.625 22:59:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:10.625 22:59:59 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:14:10.625 22:59:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:10.625 22:59:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.625 22:59:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.625 ************************************ 00:14:10.625 START TEST raid0_resize_test 00:14:10.625 ************************************ 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 
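Every suite here executes under the run_test wrapper, which prints the starred START/END banners and times the test function, producing the real/user/sys lines in the log. Roughly, with the argument-count guard and xtrace management of the real wrapper left out:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # the source of the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }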
00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=130661 00:14:10.625 Process raid pid: 130661 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 130661' 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 130661 /var/tmp/spdk-raid.sock 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 130661 ']' 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.625 22:59:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.625 [2024-07-13 23:00:00.006836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
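waitforlisten blocks until the freshly launched bdev_svc answers on its UNIX-domain RPC socket, retrying up to max_retries=100 as the trace shows. The actual readiness probe is not visible in the log; polling a trivial RPC such as rpc_get_methods is one stand-in:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" || return 1                              # target died during startup
            $rpc -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1                                               # interval assumed
        done
        return 1
    }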
00:14:10.625 [2024-07-13 23:00:00.007194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.883 [2024-07-13 23:00:00.146078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.883 [2024-07-13 23:00:00.215990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.883 [2024-07-13 23:00:00.286619] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.141 23:00:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.141 23:00:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:14:11.141 23:00:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:11.399 Base_1 00:14:11.399 23:00:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:11.658 Base_2 00:14:11.658 23:00:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:11.916 [2024-07-13 23:00:01.128932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:11.916 [2024-07-13 23:00:01.130955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:11.916 [2024-07-13 23:00:01.131022] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:11.916 [2024-07-13 23:00:01.131037] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:11.916 [2024-07-13 23:00:01.131212] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:14:11.916 [2024-07-13 23:00:01.131589] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:11.916 [2024-07-13 23:00:01.131612] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:14:11.916 [2024-07-13 23:00:01.131779] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.916 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:12.175 [2024-07-13 23:00:01.340987] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:12.175 [2024-07-13 23:00:01.341012] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:12.175 true 00:14:12.175 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:12.175 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:14:12.432 [2024-07-13 23:00:01.585087] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.432 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:14:12.432 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:14:12.432 23:00:01 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:14:12.432 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:12.432 [2024-07-13 23:00:01.829015] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:12.432 [2024-07-13 23:00:01.829043] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:12.432 [2024-07-13 23:00:01.829088] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:12.432 true 00:14:12.689 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:12.689 23:00:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:14:12.689 [2024-07-13 23:00:02.033199] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 130661 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 130661 ']' 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 130661 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130661 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.689 killing process with pid 130661 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130661' 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 130661 00:14:12.689 [2024-07-13 23:00:02.074392] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.689 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 130661 00:14:12.690 [2024-07-13 23:00:02.074519] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.690 [2024-07-13 23:00:02.074585] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.690 [2024-07-13 23:00:02.074602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:14:12.690 [2024-07-13 23:00:02.075109] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.255 23:00:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:14:13.256 00:14:13.256 real 0m2.415s 00:14:13.256 user 0m3.860s 00:14:13.256 sys 0m0.522s 00:14:13.256 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 
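The arithmetic behind the two size checks: each 32 MiB null bdev holds 65536 512-byte blocks, so the fresh two-member raid0 reports 131072 blocks (64 MiB). Growing only Base_1 changes nothing, since raid0 capacity is bounded by its smallest member; once Base_2 is resized too, the raid doubles to 262144 blocks (128 MiB). The whole test collapses to a few RPCs, all quoted from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_null_create Base_1 32 512     # 65536 blocks of 512 B = 32 MiB
    $rpc -s $sock bdev_null_create Base_2 32 512
    $rpc -s $sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    $rpc -s $sock bdev_null_resize Base_1 64         # raid unchanged: still 131072 blocks
    $rpc -s $sock bdev_null_resize Base_2 64         # raid grows to 262144 blocks
    blkcnt=$($rpc -s $sock bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    echo $(( blkcnt * 512 / 1048576 ))               # 262144 * 512 / 2^20 = 128 (MiB)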
00:14:13.256 23:00:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 ************************************ 00:14:13.256 END TEST raid0_resize_test 00:14:13.256 ************************************ 00:14:13.256 23:00:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:13.256 23:00:02 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:13.256 23:00:02 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:13.256 23:00:02 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:13.256 23:00:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:13.256 23:00:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.256 23:00:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 ************************************ 00:14:13.256 START TEST raid_state_function_test 00:14:13.256 ************************************ 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 
00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=130729 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130729' 00:14:13.256 Process raid pid: 130729 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 130729 /var/tmp/spdk-raid.sock 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 130729 ']' 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.256 23:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.256 [2024-07-13 23:00:02.481334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:13.256 [2024-07-13 23:00:02.481608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.256 [2024-07-13 23:00:02.619904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.514 [2024-07-13 23:00:02.700039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.514 [2024-07-13 23:00:02.771907] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.078 23:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.078 23:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:14.078 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:14.334 [2024-07-13 23:00:03.577838] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.334 [2024-07-13 23:00:03.577953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.334 [2024-07-13 23:00:03.577971] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.334 [2024-07-13 23:00:03.577993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.334 
23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.334 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.592 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.592 "name": "Existed_Raid", 00:14:14.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.592 "strip_size_kb": 64, 00:14:14.592 "state": "configuring", 00:14:14.592 "raid_level": "raid0", 00:14:14.592 "superblock": false, 00:14:14.592 "num_base_bdevs": 2, 00:14:14.592 "num_base_bdevs_discovered": 0, 00:14:14.592 "num_base_bdevs_operational": 2, 00:14:14.592 "base_bdevs_list": [ 00:14:14.592 { 00:14:14.592 "name": "BaseBdev1", 00:14:14.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.592 "is_configured": false, 00:14:14.592 "data_offset": 0, 00:14:14.592 "data_size": 0 00:14:14.592 }, 00:14:14.592 { 00:14:14.592 "name": "BaseBdev2", 00:14:14.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.592 "is_configured": false, 00:14:14.592 "data_offset": 0, 00:14:14.592 "data_size": 0 00:14:14.592 } 00:14:14.592 ] 00:14:14.592 }' 00:14:14.592 23:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.592 23:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.157 23:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:15.415 [2024-07-13 23:00:04.730088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.415 [2024-07-13 23:00:04.730175] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:15.415 23:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.672 [2024-07-13 23:00:04.998125] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.672 [2024-07-13 23:00:04.998221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.672 [2024-07-13 23:00:04.998241] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.672 [2024-07-13 23:00:04.998275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:14:15.672 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.945 [2024-07-13 23:00:05.220041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.945 BaseBdev1 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:15.945 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.249 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.249 [ 00:14:16.249 { 00:14:16.249 "name": "BaseBdev1", 00:14:16.249 "aliases": [ 00:14:16.249 "def75b13-d9e7-47ed-a392-006e89c15eef" 00:14:16.249 ], 00:14:16.249 "product_name": "Malloc disk", 00:14:16.249 "block_size": 512, 00:14:16.249 "num_blocks": 65536, 00:14:16.249 "uuid": "def75b13-d9e7-47ed-a392-006e89c15eef", 00:14:16.249 "assigned_rate_limits": { 00:14:16.249 "rw_ios_per_sec": 0, 00:14:16.249 "rw_mbytes_per_sec": 0, 00:14:16.249 "r_mbytes_per_sec": 0, 00:14:16.250 "w_mbytes_per_sec": 0 00:14:16.250 }, 00:14:16.250 "claimed": true, 00:14:16.250 "claim_type": "exclusive_write", 00:14:16.250 "zoned": false, 00:14:16.250 "supported_io_types": { 00:14:16.250 "read": true, 00:14:16.250 "write": true, 00:14:16.250 "unmap": true, 00:14:16.250 "flush": true, 00:14:16.250 "reset": true, 00:14:16.250 "nvme_admin": false, 00:14:16.250 "nvme_io": false, 00:14:16.250 "nvme_io_md": false, 00:14:16.250 "write_zeroes": true, 00:14:16.250 "zcopy": true, 00:14:16.250 "get_zone_info": false, 00:14:16.250 "zone_management": false, 00:14:16.250 "zone_append": false, 00:14:16.250 "compare": false, 00:14:16.250 "compare_and_write": false, 00:14:16.250 "abort": true, 00:14:16.250 "seek_hole": false, 00:14:16.250 "seek_data": false, 00:14:16.250 "copy": true, 00:14:16.250 "nvme_iov_md": false 00:14:16.250 }, 00:14:16.250 "memory_domains": [ 00:14:16.250 { 00:14:16.250 "dma_device_id": "system", 00:14:16.250 "dma_device_type": 1 00:14:16.250 }, 00:14:16.250 { 00:14:16.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.250 "dma_device_type": 2 00:14:16.250 } 00:14:16.250 ], 00:14:16.250 "driver_specific": {} 00:14:16.250 } 00:14:16.250 ] 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.250 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.508 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.508 "name": "Existed_Raid", 00:14:16.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.508 "strip_size_kb": 64, 00:14:16.508 "state": "configuring", 00:14:16.508 "raid_level": "raid0", 00:14:16.508 "superblock": false, 00:14:16.508 "num_base_bdevs": 2, 00:14:16.508 "num_base_bdevs_discovered": 1, 00:14:16.508 "num_base_bdevs_operational": 2, 00:14:16.508 "base_bdevs_list": [ 00:14:16.508 { 00:14:16.508 "name": "BaseBdev1", 00:14:16.508 "uuid": "def75b13-d9e7-47ed-a392-006e89c15eef", 00:14:16.508 "is_configured": true, 00:14:16.508 "data_offset": 0, 00:14:16.508 "data_size": 65536 00:14:16.508 }, 00:14:16.508 { 00:14:16.508 "name": "BaseBdev2", 00:14:16.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.508 "is_configured": false, 00:14:16.508 "data_offset": 0, 00:14:16.508 "data_size": 0 00:14:16.508 } 00:14:16.508 ] 00:14:16.508 }' 00:14:16.508 23:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.508 23:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.441 23:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.441 [2024-07-13 23:00:06.740352] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.441 [2024-07-13 23:00:06.740459] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:17.441 23:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:17.698 [2024-07-13 23:00:07.012442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.698 [2024-07-13 23:00:07.014666] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.698 [2024-07-13 23:00:07.014736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- 
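verify_raid_bdev_state extracts the raid's descriptor from bdev_raid_get_bdevs all with a jq select and asserts individual fields against the expected values. The trace above expects configuring with one of two base bdevs discovered, since only BaseBdev1 exists yet. The checks amount to something like this (field names taken from the JSON in the log; the assertion form is an approximation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r .state <<< "$info")" = configuring ]
    [ "$(jq -r .raid_level <<< "$info")" = raid0 ]
    [ "$(jq -r .strip_size_kb <<< "$info")" -eq 64 ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 1 ]
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 2 ]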
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.698 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.955 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.955 "name": "Existed_Raid", 00:14:17.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.955 "strip_size_kb": 64, 00:14:17.955 "state": "configuring", 00:14:17.955 "raid_level": "raid0", 00:14:17.955 "superblock": false, 00:14:17.955 "num_base_bdevs": 2, 00:14:17.955 "num_base_bdevs_discovered": 1, 00:14:17.955 "num_base_bdevs_operational": 2, 00:14:17.955 "base_bdevs_list": [ 00:14:17.955 { 00:14:17.956 "name": "BaseBdev1", 00:14:17.956 "uuid": "def75b13-d9e7-47ed-a392-006e89c15eef", 00:14:17.956 "is_configured": true, 00:14:17.956 "data_offset": 0, 00:14:17.956 "data_size": 65536 00:14:17.956 }, 00:14:17.956 { 00:14:17.956 "name": "BaseBdev2", 00:14:17.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.956 "is_configured": false, 00:14:17.956 "data_offset": 0, 00:14:17.956 "data_size": 0 00:14:17.956 } 00:14:17.956 ] 00:14:17.956 }' 00:14:17.956 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.956 23:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.520 23:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.777 [2024-07-13 23:00:08.177662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.777 [2024-07-13 23:00:08.177723] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:18.777 [2024-07-13 23:00:08.177735] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:18.777 [2024-07-13 23:00:08.177898] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:18.777 [2024-07-13 23:00:08.178369] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:18.777 
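The bdev_malloc_create just issued supplies BaseBdev2, which lets the raid finish configuring and go online in the records that follow; waitforbdev is how the test synchronizes on the new bdev. Unlike waitfornbd it does not poll from the shell: bdev_wait_for_examine blocks until examination of all registered bdevs finishes, and bdev_get_bdevs -t pushes the wait into the target itself. A sketch of the wrapper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}   # ms, default from the trace
        $rpc -s $sock bdev_wait_for_examine
        # -t makes the target wait up to bdev_timeout ms for the bdev to appear
        $rpc -s $sock bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }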
[2024-07-13 23:00:08.178393] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:18.777 [2024-07-13 23:00:08.178692] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.777 BaseBdev2 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:19.034 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:19.292 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.292 [ 00:14:19.292 { 00:14:19.292 "name": "BaseBdev2", 00:14:19.292 "aliases": [ 00:14:19.292 "981dd1ae-005d-4c68-8667-8f6be2273b55" 00:14:19.292 ], 00:14:19.292 "product_name": "Malloc disk", 00:14:19.292 "block_size": 512, 00:14:19.292 "num_blocks": 65536, 00:14:19.292 "uuid": "981dd1ae-005d-4c68-8667-8f6be2273b55", 00:14:19.292 "assigned_rate_limits": { 00:14:19.292 "rw_ios_per_sec": 0, 00:14:19.292 "rw_mbytes_per_sec": 0, 00:14:19.292 "r_mbytes_per_sec": 0, 00:14:19.292 "w_mbytes_per_sec": 0 00:14:19.292 }, 00:14:19.292 "claimed": true, 00:14:19.292 "claim_type": "exclusive_write", 00:14:19.292 "zoned": false, 00:14:19.292 "supported_io_types": { 00:14:19.292 "read": true, 00:14:19.292 "write": true, 00:14:19.292 "unmap": true, 00:14:19.292 "flush": true, 00:14:19.292 "reset": true, 00:14:19.292 "nvme_admin": false, 00:14:19.292 "nvme_io": false, 00:14:19.292 "nvme_io_md": false, 00:14:19.292 "write_zeroes": true, 00:14:19.292 "zcopy": true, 00:14:19.292 "get_zone_info": false, 00:14:19.292 "zone_management": false, 00:14:19.292 "zone_append": false, 00:14:19.292 "compare": false, 00:14:19.292 "compare_and_write": false, 00:14:19.292 "abort": true, 00:14:19.293 "seek_hole": false, 00:14:19.293 "seek_data": false, 00:14:19.293 "copy": true, 00:14:19.293 "nvme_iov_md": false 00:14:19.293 }, 00:14:19.293 "memory_domains": [ 00:14:19.293 { 00:14:19.293 "dma_device_id": "system", 00:14:19.293 "dma_device_type": 1 00:14:19.293 }, 00:14:19.293 { 00:14:19.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.293 "dma_device_type": 2 00:14:19.293 } 00:14:19.293 ], 00:14:19.293 "driver_specific": {} 00:14:19.293 } 00:14:19.293 ] 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.293 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.562 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.562 "name": "Existed_Raid", 00:14:19.562 "uuid": "86c11454-1a6b-47a3-aee5-817916e83524", 00:14:19.562 "strip_size_kb": 64, 00:14:19.562 "state": "online", 00:14:19.562 "raid_level": "raid0", 00:14:19.562 "superblock": false, 00:14:19.562 "num_base_bdevs": 2, 00:14:19.562 "num_base_bdevs_discovered": 2, 00:14:19.562 "num_base_bdevs_operational": 2, 00:14:19.562 "base_bdevs_list": [ 00:14:19.562 { 00:14:19.562 "name": "BaseBdev1", 00:14:19.562 "uuid": "def75b13-d9e7-47ed-a392-006e89c15eef", 00:14:19.562 "is_configured": true, 00:14:19.562 "data_offset": 0, 00:14:19.562 "data_size": 65536 00:14:19.562 }, 00:14:19.562 { 00:14:19.562 "name": "BaseBdev2", 00:14:19.562 "uuid": "981dd1ae-005d-4c68-8667-8f6be2273b55", 00:14:19.562 "is_configured": true, 00:14:19.562 "data_offset": 0, 00:14:19.562 "data_size": 65536 00:14:19.562 } 00:14:19.562 ] 00:14:19.562 }' 00:14:19.562 23:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.562 23:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:20.496 [2024-07-13 23:00:09.802234] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.496 23:00:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:20.496 "name": "Existed_Raid", 00:14:20.496 "aliases": [ 00:14:20.496 "86c11454-1a6b-47a3-aee5-817916e83524" 00:14:20.496 ], 00:14:20.496 "product_name": "Raid Volume", 00:14:20.496 "block_size": 512, 00:14:20.496 "num_blocks": 131072, 00:14:20.496 "uuid": "86c11454-1a6b-47a3-aee5-817916e83524", 00:14:20.496 "assigned_rate_limits": { 00:14:20.496 "rw_ios_per_sec": 0, 00:14:20.496 "rw_mbytes_per_sec": 0, 00:14:20.496 "r_mbytes_per_sec": 0, 00:14:20.497 "w_mbytes_per_sec": 0 00:14:20.497 }, 00:14:20.497 "claimed": false, 00:14:20.497 "zoned": false, 00:14:20.497 "supported_io_types": { 00:14:20.497 "read": true, 00:14:20.497 "write": true, 00:14:20.497 "unmap": true, 00:14:20.497 "flush": true, 00:14:20.497 "reset": true, 00:14:20.497 "nvme_admin": false, 00:14:20.497 "nvme_io": false, 00:14:20.497 "nvme_io_md": false, 00:14:20.497 "write_zeroes": true, 00:14:20.497 "zcopy": false, 00:14:20.497 "get_zone_info": false, 00:14:20.497 "zone_management": false, 00:14:20.497 "zone_append": false, 00:14:20.497 "compare": false, 00:14:20.497 "compare_and_write": false, 00:14:20.497 "abort": false, 00:14:20.497 "seek_hole": false, 00:14:20.497 "seek_data": false, 00:14:20.497 "copy": false, 00:14:20.497 "nvme_iov_md": false 00:14:20.497 }, 00:14:20.497 "memory_domains": [ 00:14:20.497 { 00:14:20.497 "dma_device_id": "system", 00:14:20.497 "dma_device_type": 1 00:14:20.497 }, 00:14:20.497 { 00:14:20.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.497 "dma_device_type": 2 00:14:20.497 }, 00:14:20.497 { 00:14:20.497 "dma_device_id": "system", 00:14:20.497 "dma_device_type": 1 00:14:20.497 }, 00:14:20.497 { 00:14:20.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.497 "dma_device_type": 2 00:14:20.497 } 00:14:20.497 ], 00:14:20.497 "driver_specific": { 00:14:20.497 "raid": { 00:14:20.497 "uuid": "86c11454-1a6b-47a3-aee5-817916e83524", 00:14:20.497 "strip_size_kb": 64, 00:14:20.497 "state": "online", 00:14:20.497 "raid_level": "raid0", 00:14:20.497 "superblock": false, 00:14:20.497 "num_base_bdevs": 2, 00:14:20.497 "num_base_bdevs_discovered": 2, 00:14:20.497 "num_base_bdevs_operational": 2, 00:14:20.497 "base_bdevs_list": [ 00:14:20.497 { 00:14:20.497 "name": "BaseBdev1", 00:14:20.497 "uuid": "def75b13-d9e7-47ed-a392-006e89c15eef", 00:14:20.497 "is_configured": true, 00:14:20.497 "data_offset": 0, 00:14:20.497 "data_size": 65536 00:14:20.497 }, 00:14:20.497 { 00:14:20.497 "name": "BaseBdev2", 00:14:20.497 "uuid": "981dd1ae-005d-4c68-8667-8f6be2273b55", 00:14:20.497 "is_configured": true, 00:14:20.497 "data_offset": 0, 00:14:20.497 "data_size": 65536 00:14:20.497 } 00:14:20.497 ] 00:14:20.497 } 00:14:20.497 } 00:14:20.497 }' 00:14:20.497 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.497 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:20.497 BaseBdev2' 00:14:20.497 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:20.497 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:20.497 23:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:20.755 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:20.755 
"name": "BaseBdev1", 00:14:20.755 "aliases": [ 00:14:20.755 "def75b13-d9e7-47ed-a392-006e89c15eef" 00:14:20.755 ], 00:14:20.755 "product_name": "Malloc disk", 00:14:20.755 "block_size": 512, 00:14:20.755 "num_blocks": 65536, 00:14:20.755 "uuid": "def75b13-d9e7-47ed-a392-006e89c15eef", 00:14:20.755 "assigned_rate_limits": { 00:14:20.755 "rw_ios_per_sec": 0, 00:14:20.755 "rw_mbytes_per_sec": 0, 00:14:20.755 "r_mbytes_per_sec": 0, 00:14:20.755 "w_mbytes_per_sec": 0 00:14:20.755 }, 00:14:20.755 "claimed": true, 00:14:20.755 "claim_type": "exclusive_write", 00:14:20.755 "zoned": false, 00:14:20.755 "supported_io_types": { 00:14:20.755 "read": true, 00:14:20.755 "write": true, 00:14:20.755 "unmap": true, 00:14:20.755 "flush": true, 00:14:20.755 "reset": true, 00:14:20.755 "nvme_admin": false, 00:14:20.755 "nvme_io": false, 00:14:20.755 "nvme_io_md": false, 00:14:20.755 "write_zeroes": true, 00:14:20.755 "zcopy": true, 00:14:20.755 "get_zone_info": false, 00:14:20.755 "zone_management": false, 00:14:20.755 "zone_append": false, 00:14:20.755 "compare": false, 00:14:20.755 "compare_and_write": false, 00:14:20.755 "abort": true, 00:14:20.755 "seek_hole": false, 00:14:20.755 "seek_data": false, 00:14:20.755 "copy": true, 00:14:20.755 "nvme_iov_md": false 00:14:20.755 }, 00:14:20.755 "memory_domains": [ 00:14:20.755 { 00:14:20.755 "dma_device_id": "system", 00:14:20.755 "dma_device_type": 1 00:14:20.755 }, 00:14:20.755 { 00:14:20.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.755 "dma_device_type": 2 00:14:20.755 } 00:14:20.755 ], 00:14:20.755 "driver_specific": {} 00:14:20.755 }' 00:14:20.755 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.012 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:21.269 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.526 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.526 "name": "BaseBdev2", 00:14:21.526 "aliases": [ 00:14:21.526 "981dd1ae-005d-4c68-8667-8f6be2273b55" 00:14:21.526 ], 00:14:21.526 "product_name": "Malloc disk", 00:14:21.526 "block_size": 512, 
00:14:21.526 "num_blocks": 65536, 00:14:21.526 "uuid": "981dd1ae-005d-4c68-8667-8f6be2273b55", 00:14:21.526 "assigned_rate_limits": { 00:14:21.526 "rw_ios_per_sec": 0, 00:14:21.526 "rw_mbytes_per_sec": 0, 00:14:21.526 "r_mbytes_per_sec": 0, 00:14:21.526 "w_mbytes_per_sec": 0 00:14:21.526 }, 00:14:21.526 "claimed": true, 00:14:21.526 "claim_type": "exclusive_write", 00:14:21.526 "zoned": false, 00:14:21.526 "supported_io_types": { 00:14:21.526 "read": true, 00:14:21.526 "write": true, 00:14:21.526 "unmap": true, 00:14:21.526 "flush": true, 00:14:21.526 "reset": true, 00:14:21.526 "nvme_admin": false, 00:14:21.526 "nvme_io": false, 00:14:21.526 "nvme_io_md": false, 00:14:21.526 "write_zeroes": true, 00:14:21.526 "zcopy": true, 00:14:21.526 "get_zone_info": false, 00:14:21.526 "zone_management": false, 00:14:21.526 "zone_append": false, 00:14:21.526 "compare": false, 00:14:21.526 "compare_and_write": false, 00:14:21.526 "abort": true, 00:14:21.526 "seek_hole": false, 00:14:21.526 "seek_data": false, 00:14:21.526 "copy": true, 00:14:21.526 "nvme_iov_md": false 00:14:21.526 }, 00:14:21.526 "memory_domains": [ 00:14:21.526 { 00:14:21.526 "dma_device_id": "system", 00:14:21.526 "dma_device_type": 1 00:14:21.526 }, 00:14:21.526 { 00:14:21.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.526 "dma_device_type": 2 00:14:21.526 } 00:14:21.526 ], 00:14:21.526 "driver_specific": {} 00:14:21.526 }' 00:14:21.526 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.526 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.526 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.526 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.784 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.784 23:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.784 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.784 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.784 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.784 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.784 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.043 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.043 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:22.302 [2024-07-13 23:00:11.454432] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.302 [2024-07-13 23:00:11.454504] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.302 [2024-07-13 23:00:11.454600] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:22.302 23:00:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.302 "name": "Existed_Raid", 00:14:22.302 "uuid": "86c11454-1a6b-47a3-aee5-817916e83524", 00:14:22.302 "strip_size_kb": 64, 00:14:22.302 "state": "offline", 00:14:22.302 "raid_level": "raid0", 00:14:22.302 "superblock": false, 00:14:22.302 "num_base_bdevs": 2, 00:14:22.302 "num_base_bdevs_discovered": 1, 00:14:22.302 "num_base_bdevs_operational": 1, 00:14:22.302 "base_bdevs_list": [ 00:14:22.302 { 00:14:22.302 "name": null, 00:14:22.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.302 "is_configured": false, 00:14:22.302 "data_offset": 0, 00:14:22.302 "data_size": 65536 00:14:22.302 }, 00:14:22.302 { 00:14:22.302 "name": "BaseBdev2", 00:14:22.302 "uuid": "981dd1ae-005d-4c68-8667-8f6be2273b55", 00:14:22.302 "is_configured": true, 00:14:22.302 "data_offset": 0, 00:14:22.302 "data_size": 65536 00:14:22.302 } 00:14:22.302 ] 00:14:22.302 }' 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.302 23:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:23.236 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:23.494 [2024-07-13 23:00:12.815269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.494 [2024-07-13 23:00:12.815402] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:23.494 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:23.494 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:23.494 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.494 23:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 130729 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 130729 ']' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 130729 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130729 00:14:23.751 killing process with pid 130729 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130729' 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 130729 00:14:23.751 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 130729 00:14:23.752 [2024-07-13 23:00:13.122712] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.752 [2024-07-13 23:00:13.122815] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.325 ************************************ 00:14:24.325 END TEST raid_state_function_test 00:14:24.325 ************************************ 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:24.325 00:14:24.325 real 0m11.001s 00:14:24.325 user 0m20.320s 00:14:24.325 sys 0m1.274s 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.325 23:00:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:24.325 23:00:13 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 
true 00:14:24.325 23:00:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:24.325 23:00:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.325 23:00:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.325 ************************************ 00:14:24.325 START TEST raid_state_function_test_sb 00:14:24.325 ************************************ 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=131105 00:14:24.325 Process raid pid: 131105 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131105' 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 
0 -L bdev_raid 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 131105 /var/tmp/spdk-raid.sock 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 131105 ']' 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.325 23:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.325 [2024-07-13 23:00:13.530493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:24.325 [2024-07-13 23:00:13.530706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.325 [2024-07-13 23:00:13.664990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.583 [2024-07-13 23:00:13.738941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.583 [2024-07-13 23:00:13.812165] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.149 23:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.149 23:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:25.149 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:25.407 [2024-07-13 23:00:14.665932] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.407 [2024-07-13 23:00:14.666042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.407 [2024-07-13 23:00:14.666059] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.407 [2024-07-13 23:00:14.666082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:25.407 23:00:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.407 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.672 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.672 "name": "Existed_Raid", 00:14:25.672 "uuid": "5c341f6a-f4d8-420a-b219-aca6db28161a", 00:14:25.672 "strip_size_kb": 64, 00:14:25.672 "state": "configuring", 00:14:25.672 "raid_level": "raid0", 00:14:25.672 "superblock": true, 00:14:25.672 "num_base_bdevs": 2, 00:14:25.672 "num_base_bdevs_discovered": 0, 00:14:25.672 "num_base_bdevs_operational": 2, 00:14:25.672 "base_bdevs_list": [ 00:14:25.672 { 00:14:25.672 "name": "BaseBdev1", 00:14:25.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.672 "is_configured": false, 00:14:25.672 "data_offset": 0, 00:14:25.672 "data_size": 0 00:14:25.672 }, 00:14:25.672 { 00:14:25.672 "name": "BaseBdev2", 00:14:25.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.672 "is_configured": false, 00:14:25.672 "data_offset": 0, 00:14:25.672 "data_size": 0 00:14:25.672 } 00:14:25.672 ] 00:14:25.672 }' 00:14:25.672 23:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.672 23:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.237 23:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:26.494 [2024-07-13 23:00:15.849962] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.494 [2024-07-13 23:00:15.850023] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:26.494 23:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:26.751 [2024-07-13 23:00:16.118590] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.751 [2024-07-13 23:00:16.118699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.752 [2024-07-13 23:00:16.118714] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.752 [2024-07-13 23:00:16.118743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.752 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.009 [2024-07-13 23:00:16.345498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.009 BaseBdev1 00:14:27.009 23:00:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:27.009 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:27.009 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:27.009 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:27.009 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:27.009 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:27.009 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.267 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.525 [ 00:14:27.525 { 00:14:27.525 "name": "BaseBdev1", 00:14:27.525 "aliases": [ 00:14:27.525 "386eb40f-2c2a-4a06-b76a-abe3582d645d" 00:14:27.525 ], 00:14:27.525 "product_name": "Malloc disk", 00:14:27.525 "block_size": 512, 00:14:27.525 "num_blocks": 65536, 00:14:27.525 "uuid": "386eb40f-2c2a-4a06-b76a-abe3582d645d", 00:14:27.525 "assigned_rate_limits": { 00:14:27.525 "rw_ios_per_sec": 0, 00:14:27.525 "rw_mbytes_per_sec": 0, 00:14:27.525 "r_mbytes_per_sec": 0, 00:14:27.525 "w_mbytes_per_sec": 0 00:14:27.525 }, 00:14:27.525 "claimed": true, 00:14:27.525 "claim_type": "exclusive_write", 00:14:27.525 "zoned": false, 00:14:27.526 "supported_io_types": { 00:14:27.526 "read": true, 00:14:27.526 "write": true, 00:14:27.526 "unmap": true, 00:14:27.526 "flush": true, 00:14:27.526 "reset": true, 00:14:27.526 "nvme_admin": false, 00:14:27.526 "nvme_io": false, 00:14:27.526 "nvme_io_md": false, 00:14:27.526 "write_zeroes": true, 00:14:27.526 "zcopy": true, 00:14:27.526 "get_zone_info": false, 00:14:27.526 "zone_management": false, 00:14:27.526 "zone_append": false, 00:14:27.526 "compare": false, 00:14:27.526 "compare_and_write": false, 00:14:27.526 "abort": true, 00:14:27.526 "seek_hole": false, 00:14:27.526 "seek_data": false, 00:14:27.526 "copy": true, 00:14:27.526 "nvme_iov_md": false 00:14:27.526 }, 00:14:27.526 "memory_domains": [ 00:14:27.526 { 00:14:27.526 "dma_device_id": "system", 00:14:27.526 "dma_device_type": 1 00:14:27.526 }, 00:14:27.526 { 00:14:27.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.526 "dma_device_type": 2 00:14:27.526 } 00:14:27.526 ], 00:14:27.526 "driver_specific": {} 00:14:27.526 } 00:14:27.526 ] 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.526 23:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.785 23:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.785 "name": "Existed_Raid", 00:14:27.785 "uuid": "dbcda054-31fd-4006-a848-83305e319606", 00:14:27.785 "strip_size_kb": 64, 00:14:27.785 "state": "configuring", 00:14:27.785 "raid_level": "raid0", 00:14:27.785 "superblock": true, 00:14:27.785 "num_base_bdevs": 2, 00:14:27.785 "num_base_bdevs_discovered": 1, 00:14:27.785 "num_base_bdevs_operational": 2, 00:14:27.785 "base_bdevs_list": [ 00:14:27.785 { 00:14:27.785 "name": "BaseBdev1", 00:14:27.785 "uuid": "386eb40f-2c2a-4a06-b76a-abe3582d645d", 00:14:27.785 "is_configured": true, 00:14:27.785 "data_offset": 2048, 00:14:27.785 "data_size": 63488 00:14:27.785 }, 00:14:27.785 { 00:14:27.785 "name": "BaseBdev2", 00:14:27.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.785 "is_configured": false, 00:14:27.785 "data_offset": 0, 00:14:27.785 "data_size": 0 00:14:27.785 } 00:14:27.785 ] 00:14:27.785 }' 00:14:27.785 23:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.785 23:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.351 23:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:28.609 [2024-07-13 23:00:17.854043] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.609 [2024-07-13 23:00:17.854108] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:28.609 23:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:28.867 [2024-07-13 23:00:18.118152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.867 [2024-07-13 23:00:18.120374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.867 [2024-07-13 23:00:18.120431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.867 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.124 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.124 "name": "Existed_Raid", 00:14:29.124 "uuid": "845e142b-2515-4873-92b3-66904f27f88b", 00:14:29.124 "strip_size_kb": 64, 00:14:29.124 "state": "configuring", 00:14:29.124 "raid_level": "raid0", 00:14:29.124 "superblock": true, 00:14:29.124 "num_base_bdevs": 2, 00:14:29.124 "num_base_bdevs_discovered": 1, 00:14:29.124 "num_base_bdevs_operational": 2, 00:14:29.124 "base_bdevs_list": [ 00:14:29.124 { 00:14:29.124 "name": "BaseBdev1", 00:14:29.124 "uuid": "386eb40f-2c2a-4a06-b76a-abe3582d645d", 00:14:29.124 "is_configured": true, 00:14:29.124 "data_offset": 2048, 00:14:29.124 "data_size": 63488 00:14:29.124 }, 00:14:29.124 { 00:14:29.124 "name": "BaseBdev2", 00:14:29.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.124 "is_configured": false, 00:14:29.124 "data_offset": 0, 00:14:29.124 "data_size": 0 00:14:29.124 } 00:14:29.124 ] 00:14:29.124 }' 00:14:29.124 23:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.124 23:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.689 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.947 [2024-07-13 23:00:19.300684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.947 [2024-07-13 23:00:19.300994] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:29.947 [2024-07-13 23:00:19.301011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:29.947 [2024-07-13 23:00:19.301191] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:29.947 [2024-07-13 23:00:19.301633] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:29.947 [2024-07-13 23:00:19.301648] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:29.947 BaseBdev2 00:14:29.947 [2024-07-13 23:00:19.301789] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:29.947 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:30.205 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.463 [ 00:14:30.463 { 00:14:30.463 "name": "BaseBdev2", 00:14:30.463 "aliases": [ 00:14:30.463 "b2084de3-20ab-48af-b84f-0c6ded010e12" 00:14:30.463 ], 00:14:30.463 "product_name": "Malloc disk", 00:14:30.463 "block_size": 512, 00:14:30.463 "num_blocks": 65536, 00:14:30.463 "uuid": "b2084de3-20ab-48af-b84f-0c6ded010e12", 00:14:30.463 "assigned_rate_limits": { 00:14:30.463 "rw_ios_per_sec": 0, 00:14:30.463 "rw_mbytes_per_sec": 0, 00:14:30.463 "r_mbytes_per_sec": 0, 00:14:30.463 "w_mbytes_per_sec": 0 00:14:30.463 }, 00:14:30.463 "claimed": true, 00:14:30.463 "claim_type": "exclusive_write", 00:14:30.463 "zoned": false, 00:14:30.463 "supported_io_types": { 00:14:30.463 "read": true, 00:14:30.463 "write": true, 00:14:30.463 "unmap": true, 00:14:30.463 "flush": true, 00:14:30.463 "reset": true, 00:14:30.463 "nvme_admin": false, 00:14:30.463 "nvme_io": false, 00:14:30.463 "nvme_io_md": false, 00:14:30.463 "write_zeroes": true, 00:14:30.463 "zcopy": true, 00:14:30.463 "get_zone_info": false, 00:14:30.463 "zone_management": false, 00:14:30.463 "zone_append": false, 00:14:30.463 "compare": false, 00:14:30.463 "compare_and_write": false, 00:14:30.463 "abort": true, 00:14:30.463 "seek_hole": false, 00:14:30.463 "seek_data": false, 00:14:30.463 "copy": true, 00:14:30.463 "nvme_iov_md": false 00:14:30.463 }, 00:14:30.463 "memory_domains": [ 00:14:30.463 { 00:14:30.463 "dma_device_id": "system", 00:14:30.463 "dma_device_type": 1 00:14:30.463 }, 00:14:30.463 { 00:14:30.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.463 "dma_device_type": 2 00:14:30.463 } 00:14:30.463 ], 00:14:30.463 "driver_specific": {} 00:14:30.463 } 00:14:30.464 ] 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.464 23:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.722 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.722 "name": "Existed_Raid", 00:14:30.722 "uuid": "845e142b-2515-4873-92b3-66904f27f88b", 00:14:30.722 "strip_size_kb": 64, 00:14:30.722 "state": "online", 00:14:30.722 "raid_level": "raid0", 00:14:30.722 "superblock": true, 00:14:30.722 "num_base_bdevs": 2, 00:14:30.722 "num_base_bdevs_discovered": 2, 00:14:30.722 "num_base_bdevs_operational": 2, 00:14:30.722 "base_bdevs_list": [ 00:14:30.722 { 00:14:30.722 "name": "BaseBdev1", 00:14:30.722 "uuid": "386eb40f-2c2a-4a06-b76a-abe3582d645d", 00:14:30.722 "is_configured": true, 00:14:30.722 "data_offset": 2048, 00:14:30.722 "data_size": 63488 00:14:30.722 }, 00:14:30.722 { 00:14:30.722 "name": "BaseBdev2", 00:14:30.722 "uuid": "b2084de3-20ab-48af-b84f-0c6ded010e12", 00:14:30.722 "is_configured": true, 00:14:30.722 "data_offset": 2048, 00:14:30.722 "data_size": 63488 00:14:30.722 } 00:14:30.722 ] 00:14:30.722 }' 00:14:30.722 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.722 23:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:31.289 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:31.557 [2024-07-13 23:00:20.885468] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.557 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:31.557 "name": "Existed_Raid", 00:14:31.557 "aliases": [ 00:14:31.557 
"845e142b-2515-4873-92b3-66904f27f88b" 00:14:31.557 ], 00:14:31.557 "product_name": "Raid Volume", 00:14:31.557 "block_size": 512, 00:14:31.557 "num_blocks": 126976, 00:14:31.557 "uuid": "845e142b-2515-4873-92b3-66904f27f88b", 00:14:31.557 "assigned_rate_limits": { 00:14:31.557 "rw_ios_per_sec": 0, 00:14:31.557 "rw_mbytes_per_sec": 0, 00:14:31.557 "r_mbytes_per_sec": 0, 00:14:31.557 "w_mbytes_per_sec": 0 00:14:31.557 }, 00:14:31.557 "claimed": false, 00:14:31.557 "zoned": false, 00:14:31.557 "supported_io_types": { 00:14:31.557 "read": true, 00:14:31.557 "write": true, 00:14:31.557 "unmap": true, 00:14:31.557 "flush": true, 00:14:31.557 "reset": true, 00:14:31.557 "nvme_admin": false, 00:14:31.557 "nvme_io": false, 00:14:31.557 "nvme_io_md": false, 00:14:31.557 "write_zeroes": true, 00:14:31.557 "zcopy": false, 00:14:31.557 "get_zone_info": false, 00:14:31.557 "zone_management": false, 00:14:31.557 "zone_append": false, 00:14:31.557 "compare": false, 00:14:31.557 "compare_and_write": false, 00:14:31.557 "abort": false, 00:14:31.557 "seek_hole": false, 00:14:31.557 "seek_data": false, 00:14:31.557 "copy": false, 00:14:31.557 "nvme_iov_md": false 00:14:31.557 }, 00:14:31.557 "memory_domains": [ 00:14:31.557 { 00:14:31.557 "dma_device_id": "system", 00:14:31.557 "dma_device_type": 1 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.557 "dma_device_type": 2 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "dma_device_id": "system", 00:14:31.557 "dma_device_type": 1 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.557 "dma_device_type": 2 00:14:31.557 } 00:14:31.557 ], 00:14:31.557 "driver_specific": { 00:14:31.557 "raid": { 00:14:31.557 "uuid": "845e142b-2515-4873-92b3-66904f27f88b", 00:14:31.557 "strip_size_kb": 64, 00:14:31.557 "state": "online", 00:14:31.557 "raid_level": "raid0", 00:14:31.557 "superblock": true, 00:14:31.557 "num_base_bdevs": 2, 00:14:31.557 "num_base_bdevs_discovered": 2, 00:14:31.557 "num_base_bdevs_operational": 2, 00:14:31.557 "base_bdevs_list": [ 00:14:31.557 { 00:14:31.557 "name": "BaseBdev1", 00:14:31.557 "uuid": "386eb40f-2c2a-4a06-b76a-abe3582d645d", 00:14:31.557 "is_configured": true, 00:14:31.557 "data_offset": 2048, 00:14:31.557 "data_size": 63488 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "name": "BaseBdev2", 00:14:31.557 "uuid": "b2084de3-20ab-48af-b84f-0c6ded010e12", 00:14:31.557 "is_configured": true, 00:14:31.557 "data_offset": 2048, 00:14:31.557 "data_size": 63488 00:14:31.557 } 00:14:31.557 ] 00:14:31.557 } 00:14:31.557 } 00:14:31.557 }' 00:14:31.557 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.557 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:31.557 BaseBdev2' 00:14:31.557 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.838 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.838 23:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:31.838 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.838 "name": "BaseBdev1", 00:14:31.838 "aliases": [ 00:14:31.838 "386eb40f-2c2a-4a06-b76a-abe3582d645d" 
00:14:31.838 ], 00:14:31.838 "product_name": "Malloc disk", 00:14:31.838 "block_size": 512, 00:14:31.838 "num_blocks": 65536, 00:14:31.838 "uuid": "386eb40f-2c2a-4a06-b76a-abe3582d645d", 00:14:31.838 "assigned_rate_limits": { 00:14:31.838 "rw_ios_per_sec": 0, 00:14:31.838 "rw_mbytes_per_sec": 0, 00:14:31.838 "r_mbytes_per_sec": 0, 00:14:31.838 "w_mbytes_per_sec": 0 00:14:31.838 }, 00:14:31.838 "claimed": true, 00:14:31.838 "claim_type": "exclusive_write", 00:14:31.838 "zoned": false, 00:14:31.838 "supported_io_types": { 00:14:31.838 "read": true, 00:14:31.838 "write": true, 00:14:31.838 "unmap": true, 00:14:31.838 "flush": true, 00:14:31.838 "reset": true, 00:14:31.838 "nvme_admin": false, 00:14:31.838 "nvme_io": false, 00:14:31.838 "nvme_io_md": false, 00:14:31.838 "write_zeroes": true, 00:14:31.838 "zcopy": true, 00:14:31.838 "get_zone_info": false, 00:14:31.838 "zone_management": false, 00:14:31.838 "zone_append": false, 00:14:31.838 "compare": false, 00:14:31.838 "compare_and_write": false, 00:14:31.838 "abort": true, 00:14:31.838 "seek_hole": false, 00:14:31.838 "seek_data": false, 00:14:31.838 "copy": true, 00:14:31.838 "nvme_iov_md": false 00:14:31.838 }, 00:14:31.838 "memory_domains": [ 00:14:31.838 { 00:14:31.838 "dma_device_id": "system", 00:14:31.838 "dma_device_type": 1 00:14:31.838 }, 00:14:31.838 { 00:14:31.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.838 "dma_device_type": 2 00:14:31.838 } 00:14:31.838 ], 00:14:31.838 "driver_specific": {} 00:14:31.838 }' 00:14:31.838 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.097 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:32.355 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:32.613 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:32.613 "name": "BaseBdev2", 00:14:32.613 "aliases": [ 00:14:32.613 "b2084de3-20ab-48af-b84f-0c6ded010e12" 00:14:32.613 ], 00:14:32.613 "product_name": "Malloc disk", 00:14:32.613 "block_size": 512, 00:14:32.613 "num_blocks": 65536, 00:14:32.613 "uuid": 
"b2084de3-20ab-48af-b84f-0c6ded010e12", 00:14:32.613 "assigned_rate_limits": { 00:14:32.613 "rw_ios_per_sec": 0, 00:14:32.613 "rw_mbytes_per_sec": 0, 00:14:32.613 "r_mbytes_per_sec": 0, 00:14:32.613 "w_mbytes_per_sec": 0 00:14:32.613 }, 00:14:32.613 "claimed": true, 00:14:32.613 "claim_type": "exclusive_write", 00:14:32.613 "zoned": false, 00:14:32.613 "supported_io_types": { 00:14:32.613 "read": true, 00:14:32.613 "write": true, 00:14:32.613 "unmap": true, 00:14:32.613 "flush": true, 00:14:32.613 "reset": true, 00:14:32.613 "nvme_admin": false, 00:14:32.613 "nvme_io": false, 00:14:32.613 "nvme_io_md": false, 00:14:32.613 "write_zeroes": true, 00:14:32.613 "zcopy": true, 00:14:32.613 "get_zone_info": false, 00:14:32.613 "zone_management": false, 00:14:32.613 "zone_append": false, 00:14:32.613 "compare": false, 00:14:32.613 "compare_and_write": false, 00:14:32.613 "abort": true, 00:14:32.613 "seek_hole": false, 00:14:32.613 "seek_data": false, 00:14:32.613 "copy": true, 00:14:32.613 "nvme_iov_md": false 00:14:32.613 }, 00:14:32.613 "memory_domains": [ 00:14:32.613 { 00:14:32.613 "dma_device_id": "system", 00:14:32.613 "dma_device_type": 1 00:14:32.613 }, 00:14:32.613 { 00:14:32.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.613 "dma_device_type": 2 00:14:32.613 } 00:14:32.613 ], 00:14:32.613 "driver_specific": {} 00:14:32.613 }' 00:14:32.613 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.613 23:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.613 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:32.613 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.871 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.129 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.129 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:33.387 [2024-07-13 23:00:22.565771] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.387 [2024-07-13 23:00:22.565820] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.387 [2024-07-13 23:00:22.565924] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@215 -- # return 1 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.387 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.646 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.646 "name": "Existed_Raid", 00:14:33.646 "uuid": "845e142b-2515-4873-92b3-66904f27f88b", 00:14:33.646 "strip_size_kb": 64, 00:14:33.646 "state": "offline", 00:14:33.646 "raid_level": "raid0", 00:14:33.646 "superblock": true, 00:14:33.646 "num_base_bdevs": 2, 00:14:33.646 "num_base_bdevs_discovered": 1, 00:14:33.646 "num_base_bdevs_operational": 1, 00:14:33.646 "base_bdevs_list": [ 00:14:33.646 { 00:14:33.646 "name": null, 00:14:33.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.646 "is_configured": false, 00:14:33.646 "data_offset": 2048, 00:14:33.646 "data_size": 63488 00:14:33.646 }, 00:14:33.646 { 00:14:33.646 "name": "BaseBdev2", 00:14:33.646 "uuid": "b2084de3-20ab-48af-b84f-0c6ded010e12", 00:14:33.646 "is_configured": true, 00:14:33.646 "data_offset": 2048, 00:14:33.646 "data_size": 63488 00:14:33.646 } 00:14:33.646 ] 00:14:33.646 }' 00:14:33.646 23:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.646 23:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.212 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:34.212 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:34.212 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.212 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:34.471 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:34.471 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.471 23:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:34.730 [2024-07-13 23:00:24.032003] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.730 [2024-07-13 23:00:24.032121] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:34.730 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:34.730 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:34.730 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.730 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:34.988 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 131105 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 131105 ']' 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 131105 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131105 00:14:34.989 killing process with pid 131105 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131105' 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 131105 00:14:34.989 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 131105 00:14:34.989 [2024-07-13 23:00:24.355302] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.989 [2024-07-13 23:00:24.355433] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.556 ************************************ 00:14:35.556 END TEST raid_state_function_test_sb 00:14:35.556 ************************************ 00:14:35.556 23:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:35.556 00:14:35.556 real 0m11.191s 00:14:35.556 user 0m20.454s 00:14:35.556 sys 0m1.488s 00:14:35.556 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.556 23:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.556 23:00:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:35.556 23:00:24 bdev_raid -- 
bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:35.556 23:00:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:35.556 23:00:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.556 23:00:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.556 ************************************ 00:14:35.556 START TEST raid_superblock_test 00:14:35.556 ************************************ 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=131482 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 131482 /var/tmp/spdk-raid.sock 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 131482 ']' 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:35.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.556 23:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.556 [2024-07-13 23:00:24.773870] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:35.556 [2024-07-13 23:00:24.774118] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131482 ] 00:14:35.556 [2024-07-13 23:00:24.904996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.815 [2024-07-13 23:00:24.977404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.815 [2024-07-13 23:00:25.048996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.815 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:36.074 malloc1 00:14:36.074 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.333 [2024-07-13 23:00:25.618748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.333 [2024-07-13 23:00:25.618931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.333 [2024-07-13 23:00:25.618976] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:14:36.333 [2024-07-13 23:00:25.619024] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.333 [2024-07-13 23:00:25.621663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.333 [2024-07-13 23:00:25.621739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.333 pt1 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc2 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.333 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:36.591 malloc2 00:14:36.591 23:00:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.850 [2024-07-13 23:00:26.149680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.850 [2024-07-13 23:00:26.149823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.850 [2024-07-13 23:00:26.149879] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:36.850 [2024-07-13 23:00:26.149943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.850 [2024-07-13 23:00:26.152554] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.850 [2024-07-13 23:00:26.152623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.850 pt2 00:14:36.850 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:36.850 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:36.850 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:37.109 [2024-07-13 23:00:26.429758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.109 [2024-07-13 23:00:26.431859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.109 [2024-07-13 23:00:26.432072] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:14:37.109 [2024-07-13 23:00:26.432090] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.109 [2024-07-13 23:00:26.432227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:37.109 [2024-07-13 23:00:26.432686] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:14:37.109 [2024-07-13 23:00:26.432711] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:14:37.109 [2024-07-13 23:00:26.432856] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.109 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.369 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.369 "name": "raid_bdev1", 00:14:37.369 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:37.369 "strip_size_kb": 64, 00:14:37.369 "state": "online", 00:14:37.369 "raid_level": "raid0", 00:14:37.369 "superblock": true, 00:14:37.369 "num_base_bdevs": 2, 00:14:37.369 "num_base_bdevs_discovered": 2, 00:14:37.369 "num_base_bdevs_operational": 2, 00:14:37.369 "base_bdevs_list": [ 00:14:37.369 { 00:14:37.369 "name": "pt1", 00:14:37.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.369 "is_configured": true, 00:14:37.369 "data_offset": 2048, 00:14:37.369 "data_size": 63488 00:14:37.369 }, 00:14:37.369 { 00:14:37.369 "name": "pt2", 00:14:37.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.369 "is_configured": true, 00:14:37.369 "data_offset": 2048, 00:14:37.369 "data_size": 63488 00:14:37.369 } 00:14:37.369 ] 00:14:37.369 }' 00:14:37.369 23:00:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.369 23:00:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.936 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:37.936 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:37.936 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:37.936 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:37.936 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:37.937 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:37.937 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:37.937 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:38.196 [2024-07-13 23:00:27.594160] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.454 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:38.454 "name": "raid_bdev1", 00:14:38.454 "aliases": [ 00:14:38.454 "ec2d7e89-e818-4e8d-a568-34db7916b21c" 00:14:38.454 ], 00:14:38.454 "product_name": "Raid Volume", 
00:14:38.454 "block_size": 512, 00:14:38.454 "num_blocks": 126976, 00:14:38.454 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:38.454 "assigned_rate_limits": { 00:14:38.454 "rw_ios_per_sec": 0, 00:14:38.454 "rw_mbytes_per_sec": 0, 00:14:38.454 "r_mbytes_per_sec": 0, 00:14:38.454 "w_mbytes_per_sec": 0 00:14:38.454 }, 00:14:38.454 "claimed": false, 00:14:38.454 "zoned": false, 00:14:38.454 "supported_io_types": { 00:14:38.454 "read": true, 00:14:38.454 "write": true, 00:14:38.454 "unmap": true, 00:14:38.454 "flush": true, 00:14:38.454 "reset": true, 00:14:38.454 "nvme_admin": false, 00:14:38.454 "nvme_io": false, 00:14:38.454 "nvme_io_md": false, 00:14:38.454 "write_zeroes": true, 00:14:38.454 "zcopy": false, 00:14:38.454 "get_zone_info": false, 00:14:38.454 "zone_management": false, 00:14:38.454 "zone_append": false, 00:14:38.454 "compare": false, 00:14:38.454 "compare_and_write": false, 00:14:38.454 "abort": false, 00:14:38.454 "seek_hole": false, 00:14:38.454 "seek_data": false, 00:14:38.454 "copy": false, 00:14:38.454 "nvme_iov_md": false 00:14:38.454 }, 00:14:38.454 "memory_domains": [ 00:14:38.454 { 00:14:38.454 "dma_device_id": "system", 00:14:38.454 "dma_device_type": 1 00:14:38.454 }, 00:14:38.454 { 00:14:38.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.454 "dma_device_type": 2 00:14:38.454 }, 00:14:38.454 { 00:14:38.454 "dma_device_id": "system", 00:14:38.454 "dma_device_type": 1 00:14:38.454 }, 00:14:38.454 { 00:14:38.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.454 "dma_device_type": 2 00:14:38.454 } 00:14:38.454 ], 00:14:38.454 "driver_specific": { 00:14:38.454 "raid": { 00:14:38.454 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:38.454 "strip_size_kb": 64, 00:14:38.454 "state": "online", 00:14:38.454 "raid_level": "raid0", 00:14:38.454 "superblock": true, 00:14:38.454 "num_base_bdevs": 2, 00:14:38.454 "num_base_bdevs_discovered": 2, 00:14:38.454 "num_base_bdevs_operational": 2, 00:14:38.454 "base_bdevs_list": [ 00:14:38.454 { 00:14:38.454 "name": "pt1", 00:14:38.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.454 "is_configured": true, 00:14:38.454 "data_offset": 2048, 00:14:38.454 "data_size": 63488 00:14:38.454 }, 00:14:38.454 { 00:14:38.454 "name": "pt2", 00:14:38.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.454 "is_configured": true, 00:14:38.454 "data_offset": 2048, 00:14:38.454 "data_size": 63488 00:14:38.454 } 00:14:38.454 ] 00:14:38.454 } 00:14:38.454 } 00:14:38.454 }' 00:14:38.454 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.454 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:38.454 pt2' 00:14:38.454 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:38.454 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:38.455 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:38.713 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:38.713 "name": "pt1", 00:14:38.713 "aliases": [ 00:14:38.713 "00000000-0000-0000-0000-000000000001" 00:14:38.713 ], 00:14:38.713 "product_name": "passthru", 00:14:38.713 "block_size": 512, 00:14:38.713 "num_blocks": 65536, 00:14:38.713 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:38.713 "assigned_rate_limits": { 00:14:38.713 "rw_ios_per_sec": 0, 00:14:38.713 "rw_mbytes_per_sec": 0, 00:14:38.713 "r_mbytes_per_sec": 0, 00:14:38.713 "w_mbytes_per_sec": 0 00:14:38.713 }, 00:14:38.713 "claimed": true, 00:14:38.713 "claim_type": "exclusive_write", 00:14:38.713 "zoned": false, 00:14:38.713 "supported_io_types": { 00:14:38.713 "read": true, 00:14:38.713 "write": true, 00:14:38.713 "unmap": true, 00:14:38.713 "flush": true, 00:14:38.713 "reset": true, 00:14:38.713 "nvme_admin": false, 00:14:38.713 "nvme_io": false, 00:14:38.713 "nvme_io_md": false, 00:14:38.713 "write_zeroes": true, 00:14:38.713 "zcopy": true, 00:14:38.713 "get_zone_info": false, 00:14:38.713 "zone_management": false, 00:14:38.713 "zone_append": false, 00:14:38.713 "compare": false, 00:14:38.713 "compare_and_write": false, 00:14:38.713 "abort": true, 00:14:38.713 "seek_hole": false, 00:14:38.713 "seek_data": false, 00:14:38.713 "copy": true, 00:14:38.713 "nvme_iov_md": false 00:14:38.713 }, 00:14:38.713 "memory_domains": [ 00:14:38.713 { 00:14:38.713 "dma_device_id": "system", 00:14:38.713 "dma_device_type": 1 00:14:38.713 }, 00:14:38.713 { 00:14:38.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.713 "dma_device_type": 2 00:14:38.713 } 00:14:38.713 ], 00:14:38.713 "driver_specific": { 00:14:38.713 "passthru": { 00:14:38.713 "name": "pt1", 00:14:38.713 "base_bdev_name": "malloc1" 00:14:38.713 } 00:14:38.713 } 00:14:38.713 }' 00:14:38.713 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.713 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.713 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:38.713 23:00:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.713 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.713 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:38.713 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:38.972 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:39.230 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.230 "name": "pt2", 00:14:39.230 "aliases": [ 00:14:39.230 "00000000-0000-0000-0000-000000000002" 00:14:39.230 ], 00:14:39.230 "product_name": "passthru", 00:14:39.230 "block_size": 512, 00:14:39.230 "num_blocks": 65536, 00:14:39.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.230 "assigned_rate_limits": { 00:14:39.231 "rw_ios_per_sec": 0, 00:14:39.231 "rw_mbytes_per_sec": 0, 
00:14:39.231 "r_mbytes_per_sec": 0, 00:14:39.231 "w_mbytes_per_sec": 0 00:14:39.231 }, 00:14:39.231 "claimed": true, 00:14:39.231 "claim_type": "exclusive_write", 00:14:39.231 "zoned": false, 00:14:39.231 "supported_io_types": { 00:14:39.231 "read": true, 00:14:39.231 "write": true, 00:14:39.231 "unmap": true, 00:14:39.231 "flush": true, 00:14:39.231 "reset": true, 00:14:39.231 "nvme_admin": false, 00:14:39.231 "nvme_io": false, 00:14:39.231 "nvme_io_md": false, 00:14:39.231 "write_zeroes": true, 00:14:39.231 "zcopy": true, 00:14:39.231 "get_zone_info": false, 00:14:39.231 "zone_management": false, 00:14:39.231 "zone_append": false, 00:14:39.231 "compare": false, 00:14:39.231 "compare_and_write": false, 00:14:39.231 "abort": true, 00:14:39.231 "seek_hole": false, 00:14:39.231 "seek_data": false, 00:14:39.231 "copy": true, 00:14:39.231 "nvme_iov_md": false 00:14:39.231 }, 00:14:39.231 "memory_domains": [ 00:14:39.231 { 00:14:39.231 "dma_device_id": "system", 00:14:39.231 "dma_device_type": 1 00:14:39.231 }, 00:14:39.231 { 00:14:39.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.231 "dma_device_type": 2 00:14:39.231 } 00:14:39.231 ], 00:14:39.231 "driver_specific": { 00:14:39.231 "passthru": { 00:14:39.231 "name": "pt2", 00:14:39.231 "base_bdev_name": "malloc2" 00:14:39.231 } 00:14:39.231 } 00:14:39.231 }' 00:14:39.231 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.231 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.231 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.231 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.490 23:00:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:39.749 [2024-07-13 23:00:29.086435] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.749 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ec2d7e89-e818-4e8d-a568-34db7916b21c 00:14:39.749 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ec2d7e89-e818-4e8d-a568-34db7916b21c ']' 00:14:39.749 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:40.008 [2024-07-13 23:00:29.358235] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.008 [2024-07-13 23:00:29.358263] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.008 [2024-07-13 23:00:29.358394] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.008 [2024-07-13 23:00:29.358472] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.008 [2024-07-13 23:00:29.358488] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:14:40.008 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.008 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:40.267 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:40.267 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:40.267 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.267 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:40.834 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.834 23:00:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:40.834 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:40.834 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:41.092 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:41.092 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:41.092 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:41.092 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:41.092 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.092 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:41.093 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:41.350 [2024-07-13 23:00:30.686469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.350 [2024-07-13 23:00:30.688742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.350 [2024-07-13 23:00:30.688831] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:41.350 [2024-07-13 23:00:30.688938] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:41.350 [2024-07-13 23:00:30.688993] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.350 [2024-07-13 23:00:30.689007] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:14:41.350 request: 00:14:41.350 { 00:14:41.350 "name": "raid_bdev1", 00:14:41.350 "raid_level": "raid0", 00:14:41.350 "base_bdevs": [ 00:14:41.350 "malloc1", 00:14:41.350 "malloc2" 00:14:41.350 ], 00:14:41.350 "strip_size_kb": 64, 00:14:41.350 "superblock": false, 00:14:41.350 "method": "bdev_raid_create", 00:14:41.350 "req_id": 1 00:14:41.350 } 00:14:41.350 Got JSON-RPC error response 00:14:41.350 response: 00:14:41.350 { 00:14:41.350 "code": -17, 00:14:41.350 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.350 } 00:14:41.350 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:41.350 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.350 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.350 23:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.350 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:41.350 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.609 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:41.609 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:41.609 23:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.867 [2024-07-13 23:00:31.122499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.867 [2024-07-13 23:00:31.122588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.867 [2024-07-13 23:00:31.122634] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:41.867 [2024-07-13 23:00:31.122666] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.867 [2024-07-13 23:00:31.125080] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.867 [2024-07-13 23:00:31.125139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
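Here pt1 is re-created over malloc1 with its original UUID; the examine messages that follow show the raid module finding the on-disk superblock and re-claiming the device into the half-assembled raid_bdev1 (state "configuring", one of two base bdevs discovered). For reference, the construction sequence the test uses for each volume, condensed from the trace into one plain-shell sketch (socket and script paths as in the log):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # Two 32 MiB malloc disks with 512-byte blocks (65536 blocks each).
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_malloc_create 32 512 -b malloc2
    # Wrap each in a passthru bdev with a fixed UUID so the superblock can identify it.
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # Assemble raid0 over the passthru devices: 64 KiB strip, superblock enabled (-s).
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s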
00:14:41.867 [2024-07-13 23:00:31.125218] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.867 [2024-07-13 23:00:31.125289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.867 pt1 00:14:41.867 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:41.867 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.868 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.125 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.125 "name": "raid_bdev1", 00:14:42.125 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:42.125 "strip_size_kb": 64, 00:14:42.125 "state": "configuring", 00:14:42.125 "raid_level": "raid0", 00:14:42.125 "superblock": true, 00:14:42.125 "num_base_bdevs": 2, 00:14:42.126 "num_base_bdevs_discovered": 1, 00:14:42.126 "num_base_bdevs_operational": 2, 00:14:42.126 "base_bdevs_list": [ 00:14:42.126 { 00:14:42.126 "name": "pt1", 00:14:42.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.126 "is_configured": true, 00:14:42.126 "data_offset": 2048, 00:14:42.126 "data_size": 63488 00:14:42.126 }, 00:14:42.126 { 00:14:42.126 "name": null, 00:14:42.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.126 "is_configured": false, 00:14:42.126 "data_offset": 2048, 00:14:42.126 "data_size": 63488 00:14:42.126 } 00:14:42.126 ] 00:14:42.126 }' 00:14:42.126 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.126 23:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.693 23:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:42.693 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:42.693 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:42.693 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.975 [2024-07-13 23:00:32.250725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.975 [2024-07-13 23:00:32.250861] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.975 [2024-07-13 23:00:32.250904] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:42.975 [2024-07-13 23:00:32.250937] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.975 [2024-07-13 23:00:32.251472] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.975 [2024-07-13 23:00:32.251525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.975 [2024-07-13 23:00:32.251627] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.975 [2024-07-13 23:00:32.251662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.975 [2024-07-13 23:00:32.251810] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:42.975 [2024-07-13 23:00:32.251836] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.975 [2024-07-13 23:00:32.251939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:42.975 [2024-07-13 23:00:32.252273] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:42.975 [2024-07-13 23:00:32.252298] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:42.975 [2024-07-13 23:00:32.252411] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.975 pt2 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.975 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.976 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.976 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.976 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.976 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.976 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.976 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.254 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.254 "name": "raid_bdev1", 00:14:43.254 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:43.254 "strip_size_kb": 64, 00:14:43.254 "state": "online", 00:14:43.254 "raid_level": "raid0", 00:14:43.254 "superblock": true, 
00:14:43.254 "num_base_bdevs": 2, 00:14:43.254 "num_base_bdevs_discovered": 2, 00:14:43.254 "num_base_bdevs_operational": 2, 00:14:43.254 "base_bdevs_list": [ 00:14:43.254 { 00:14:43.254 "name": "pt1", 00:14:43.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.254 "is_configured": true, 00:14:43.254 "data_offset": 2048, 00:14:43.254 "data_size": 63488 00:14:43.254 }, 00:14:43.254 { 00:14:43.254 "name": "pt2", 00:14:43.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.254 "is_configured": true, 00:14:43.254 "data_offset": 2048, 00:14:43.254 "data_size": 63488 00:14:43.254 } 00:14:43.254 ] 00:14:43.254 }' 00:14:43.254 23:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.254 23:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.829 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:44.086 [2024-07-13 23:00:33.339186] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.086 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:44.086 "name": "raid_bdev1", 00:14:44.086 "aliases": [ 00:14:44.086 "ec2d7e89-e818-4e8d-a568-34db7916b21c" 00:14:44.086 ], 00:14:44.086 "product_name": "Raid Volume", 00:14:44.086 "block_size": 512, 00:14:44.086 "num_blocks": 126976, 00:14:44.086 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:44.086 "assigned_rate_limits": { 00:14:44.086 "rw_ios_per_sec": 0, 00:14:44.086 "rw_mbytes_per_sec": 0, 00:14:44.086 "r_mbytes_per_sec": 0, 00:14:44.086 "w_mbytes_per_sec": 0 00:14:44.086 }, 00:14:44.086 "claimed": false, 00:14:44.086 "zoned": false, 00:14:44.086 "supported_io_types": { 00:14:44.086 "read": true, 00:14:44.086 "write": true, 00:14:44.086 "unmap": true, 00:14:44.086 "flush": true, 00:14:44.086 "reset": true, 00:14:44.086 "nvme_admin": false, 00:14:44.086 "nvme_io": false, 00:14:44.086 "nvme_io_md": false, 00:14:44.086 "write_zeroes": true, 00:14:44.086 "zcopy": false, 00:14:44.086 "get_zone_info": false, 00:14:44.086 "zone_management": false, 00:14:44.086 "zone_append": false, 00:14:44.086 "compare": false, 00:14:44.086 "compare_and_write": false, 00:14:44.086 "abort": false, 00:14:44.086 "seek_hole": false, 00:14:44.086 "seek_data": false, 00:14:44.086 "copy": false, 00:14:44.086 "nvme_iov_md": false 00:14:44.086 }, 00:14:44.086 "memory_domains": [ 00:14:44.086 { 00:14:44.086 "dma_device_id": "system", 00:14:44.086 "dma_device_type": 1 00:14:44.086 }, 00:14:44.086 { 00:14:44.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.086 "dma_device_type": 2 00:14:44.086 }, 00:14:44.086 { 00:14:44.086 "dma_device_id": "system", 00:14:44.086 
"dma_device_type": 1 00:14:44.086 }, 00:14:44.086 { 00:14:44.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.086 "dma_device_type": 2 00:14:44.086 } 00:14:44.086 ], 00:14:44.086 "driver_specific": { 00:14:44.086 "raid": { 00:14:44.086 "uuid": "ec2d7e89-e818-4e8d-a568-34db7916b21c", 00:14:44.086 "strip_size_kb": 64, 00:14:44.086 "state": "online", 00:14:44.086 "raid_level": "raid0", 00:14:44.086 "superblock": true, 00:14:44.086 "num_base_bdevs": 2, 00:14:44.086 "num_base_bdevs_discovered": 2, 00:14:44.086 "num_base_bdevs_operational": 2, 00:14:44.086 "base_bdevs_list": [ 00:14:44.086 { 00:14:44.086 "name": "pt1", 00:14:44.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.086 "is_configured": true, 00:14:44.086 "data_offset": 2048, 00:14:44.086 "data_size": 63488 00:14:44.086 }, 00:14:44.086 { 00:14:44.086 "name": "pt2", 00:14:44.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.086 "is_configured": true, 00:14:44.086 "data_offset": 2048, 00:14:44.086 "data_size": 63488 00:14:44.086 } 00:14:44.086 ] 00:14:44.086 } 00:14:44.086 } 00:14:44.086 }' 00:14:44.086 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.086 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:44.086 pt2' 00:14:44.086 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.086 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:44.086 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.343 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.343 "name": "pt1", 00:14:44.343 "aliases": [ 00:14:44.343 "00000000-0000-0000-0000-000000000001" 00:14:44.343 ], 00:14:44.343 "product_name": "passthru", 00:14:44.343 "block_size": 512, 00:14:44.343 "num_blocks": 65536, 00:14:44.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.343 "assigned_rate_limits": { 00:14:44.343 "rw_ios_per_sec": 0, 00:14:44.343 "rw_mbytes_per_sec": 0, 00:14:44.343 "r_mbytes_per_sec": 0, 00:14:44.343 "w_mbytes_per_sec": 0 00:14:44.343 }, 00:14:44.343 "claimed": true, 00:14:44.343 "claim_type": "exclusive_write", 00:14:44.343 "zoned": false, 00:14:44.343 "supported_io_types": { 00:14:44.343 "read": true, 00:14:44.343 "write": true, 00:14:44.343 "unmap": true, 00:14:44.343 "flush": true, 00:14:44.343 "reset": true, 00:14:44.343 "nvme_admin": false, 00:14:44.343 "nvme_io": false, 00:14:44.343 "nvme_io_md": false, 00:14:44.343 "write_zeroes": true, 00:14:44.343 "zcopy": true, 00:14:44.343 "get_zone_info": false, 00:14:44.343 "zone_management": false, 00:14:44.343 "zone_append": false, 00:14:44.343 "compare": false, 00:14:44.343 "compare_and_write": false, 00:14:44.343 "abort": true, 00:14:44.343 "seek_hole": false, 00:14:44.343 "seek_data": false, 00:14:44.343 "copy": true, 00:14:44.343 "nvme_iov_md": false 00:14:44.343 }, 00:14:44.343 "memory_domains": [ 00:14:44.343 { 00:14:44.343 "dma_device_id": "system", 00:14:44.343 "dma_device_type": 1 00:14:44.343 }, 00:14:44.343 { 00:14:44.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.343 "dma_device_type": 2 00:14:44.343 } 00:14:44.343 ], 00:14:44.343 "driver_specific": { 00:14:44.343 "passthru": { 00:14:44.343 "name": "pt1", 00:14:44.343 "base_bdev_name": "malloc1" 
00:14:44.343 } 00:14:44.343 } 00:14:44.343 }' 00:14:44.343 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.343 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.600 23:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.858 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.858 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.858 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.858 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:44.858 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:45.115 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:45.115 "name": "pt2", 00:14:45.115 "aliases": [ 00:14:45.115 "00000000-0000-0000-0000-000000000002" 00:14:45.115 ], 00:14:45.115 "product_name": "passthru", 00:14:45.115 "block_size": 512, 00:14:45.115 "num_blocks": 65536, 00:14:45.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.115 "assigned_rate_limits": { 00:14:45.115 "rw_ios_per_sec": 0, 00:14:45.115 "rw_mbytes_per_sec": 0, 00:14:45.115 "r_mbytes_per_sec": 0, 00:14:45.115 "w_mbytes_per_sec": 0 00:14:45.115 }, 00:14:45.115 "claimed": true, 00:14:45.115 "claim_type": "exclusive_write", 00:14:45.115 "zoned": false, 00:14:45.115 "supported_io_types": { 00:14:45.115 "read": true, 00:14:45.115 "write": true, 00:14:45.115 "unmap": true, 00:14:45.115 "flush": true, 00:14:45.115 "reset": true, 00:14:45.115 "nvme_admin": false, 00:14:45.115 "nvme_io": false, 00:14:45.115 "nvme_io_md": false, 00:14:45.115 "write_zeroes": true, 00:14:45.115 "zcopy": true, 00:14:45.115 "get_zone_info": false, 00:14:45.115 "zone_management": false, 00:14:45.115 "zone_append": false, 00:14:45.115 "compare": false, 00:14:45.115 "compare_and_write": false, 00:14:45.115 "abort": true, 00:14:45.115 "seek_hole": false, 00:14:45.115 "seek_data": false, 00:14:45.115 "copy": true, 00:14:45.115 "nvme_iov_md": false 00:14:45.115 }, 00:14:45.115 "memory_domains": [ 00:14:45.115 { 00:14:45.115 "dma_device_id": "system", 00:14:45.115 "dma_device_type": 1 00:14:45.115 }, 00:14:45.115 { 00:14:45.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.115 "dma_device_type": 2 00:14:45.115 } 00:14:45.115 ], 00:14:45.115 "driver_specific": { 00:14:45.115 "passthru": { 00:14:45.115 "name": "pt2", 00:14:45.115 "base_bdev_name": "malloc2" 00:14:45.115 } 00:14:45.115 } 00:14:45.115 }' 00:14:45.115 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:14:45.115 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.115 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:45.115 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.115 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:45.373 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:45.631 [2024-07-13 23:00:34.943455] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ec2d7e89-e818-4e8d-a568-34db7916b21c '!=' ec2d7e89-e818-4e8d-a568-34db7916b21c ']' 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 131482 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 131482 ']' 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 131482 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131482 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131482' 00:14:45.631 killing process with pid 131482 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 131482 00:14:45.631 [2024-07-13 23:00:34.987203] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.631 23:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 131482 00:14:45.631 [2024-07-13 23:00:34.987326] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.631 [2024-07-13 23:00:34.987399] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:14:45.631 [2024-07-13 23:00:34.987414] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:45.631 [2024-07-13 23:00:35.012369] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.199 23:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:46.199 00:14:46.199 real 0m10.583s 00:14:46.199 user 0m19.802s 00:14:46.199 sys 0m1.444s 00:14:46.199 23:00:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.199 23:00:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.199 ************************************ 00:14:46.199 END TEST raid_superblock_test 00:14:46.199 ************************************ 00:14:46.199 23:00:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:46.199 23:00:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:46.199 23:00:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:46.199 23:00:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.199 23:00:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.199 ************************************ 00:14:46.199 START TEST raid_read_error_test 00:14:46.199 ************************************ 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:46.199 23:00:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0cy6JlsHoo 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=131844 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 131844 /var/tmp/spdk-raid.sock 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 131844 ']' 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:46.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.199 23:00:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.199 [2024-07-13 23:00:35.431486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:46.199 [2024-07-13 23:00:35.431719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131844 ] 00:14:46.199 [2024-07-13 23:00:35.571237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.458 [2024-07-13 23:00:35.642093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.458 [2024-07-13 23:00:35.713342] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.024 23:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.024 23:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:47.024 23:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:47.024 23:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:47.283 BaseBdev1_malloc 00:14:47.283 23:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:47.541 true 00:14:47.541 23:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:47.800 [2024-07-13 23:00:37.069680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:47.800 [2024-07-13 23:00:37.069816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.800 [2024-07-13 23:00:37.069873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:14:47.800 [2024-07-13 23:00:37.069936] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.800 [2024-07-13 23:00:37.072611] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.800 [2024-07-13 23:00:37.072666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.800 BaseBdev1 00:14:47.800 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:47.800 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.058 BaseBdev2_malloc 00:14:48.058 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:48.316 true 00:14:48.316 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:48.575 [2024-07-13 23:00:37.779292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:48.575 [2024-07-13 23:00:37.779423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.575 [2024-07-13 23:00:37.779472] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:48.575 [2024-07-13 23:00:37.779529] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.575 [2024-07-13 23:00:37.781884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.575 [2024-07-13 23:00:37.781939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.575 BaseBdev2 00:14:48.575 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:48.833 [2024-07-13 23:00:37.983457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.833 [2024-07-13 23:00:37.985583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.833 [2024-07-13 23:00:37.985826] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:48.833 [2024-07-13 23:00:37.985844] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:48.833 [2024-07-13 23:00:37.985976] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:48.833 [2024-07-13 23:00:37.986409] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:48.833 [2024-07-13 23:00:37.986434] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:14:48.833 [2024-07-13 23:00:37.986601] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.833 23:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.833 23:00:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.833 "name": "raid_bdev1", 00:14:48.833 "uuid": "d74bc14c-12e7-4a08-95ae-0ad9400c6fef", 00:14:48.833 "strip_size_kb": 64, 00:14:48.833 "state": "online", 00:14:48.833 "raid_level": "raid0", 00:14:48.833 "superblock": true, 00:14:48.833 "num_base_bdevs": 2, 00:14:48.833 "num_base_bdevs_discovered": 2, 00:14:48.833 "num_base_bdevs_operational": 2, 00:14:48.833 "base_bdevs_list": [ 00:14:48.833 { 00:14:48.833 "name": "BaseBdev1", 
00:14:48.833 "uuid": "1771f114-8d62-5b12-a71d-2a5bf7553a1d", 00:14:48.833 "is_configured": true, 00:14:48.833 "data_offset": 2048, 00:14:48.833 "data_size": 63488 00:14:48.833 }, 00:14:48.833 { 00:14:48.833 "name": "BaseBdev2", 00:14:48.833 "uuid": "07fa2478-d159-5ead-9eb2-2dc3ebc9c4f2", 00:14:48.833 "is_configured": true, 00:14:48.833 "data_offset": 2048, 00:14:48.833 "data_size": 63488 00:14:48.833 } 00:14:48.833 ] 00:14:48.833 }' 00:14:48.833 23:00:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.833 23:00:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.769 23:00:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:49.769 23:00:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:49.769 [2024-07-13 23:00:38.960073] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:14:50.702 23:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:50.702 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:50.702 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:50.702 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:50.702 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.961 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.218 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.218 "name": "raid_bdev1", 00:14:51.218 "uuid": "d74bc14c-12e7-4a08-95ae-0ad9400c6fef", 00:14:51.218 "strip_size_kb": 64, 00:14:51.218 "state": "online", 00:14:51.218 "raid_level": "raid0", 00:14:51.218 "superblock": true, 00:14:51.218 "num_base_bdevs": 2, 00:14:51.218 "num_base_bdevs_discovered": 2, 00:14:51.218 "num_base_bdevs_operational": 2, 00:14:51.218 "base_bdevs_list": [ 00:14:51.218 { 00:14:51.218 "name": "BaseBdev1", 00:14:51.218 "uuid": 
"1771f114-8d62-5b12-a71d-2a5bf7553a1d", 00:14:51.218 "is_configured": true, 00:14:51.218 "data_offset": 2048, 00:14:51.218 "data_size": 63488 00:14:51.218 }, 00:14:51.218 { 00:14:51.218 "name": "BaseBdev2", 00:14:51.218 "uuid": "07fa2478-d159-5ead-9eb2-2dc3ebc9c4f2", 00:14:51.218 "is_configured": true, 00:14:51.218 "data_offset": 2048, 00:14:51.218 "data_size": 63488 00:14:51.218 } 00:14:51.218 ] 00:14:51.218 }' 00:14:51.218 23:00:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.218 23:00:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:52.042 [2024-07-13 23:00:41.300344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.042 [2024-07-13 23:00:41.300414] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.042 [2024-07-13 23:00:41.302975] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.042 [2024-07-13 23:00:41.303044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.042 [2024-07-13 23:00:41.303088] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.042 [2024-07-13 23:00:41.303102] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:14:52.042 0 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 131844 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 131844 ']' 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 131844 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131844 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131844' 00:14:52.042 killing process with pid 131844 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 131844 00:14:52.042 [2024-07-13 23:00:41.337743] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.042 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 131844 00:14:52.042 [2024-07-13 23:00:41.351714] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.299 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0cy6JlsHoo 00:14:52.299 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:52.299 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:52.299 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:14:52.299 23:00:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:52.300 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:52.300 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:52.300 23:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:14:52.300 00:14:52.300 real 0m6.319s 00:14:52.300 user 0m10.049s 00:14:52.300 sys 0m0.839s 00:14:52.300 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.300 23:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.300 ************************************ 00:14:52.300 END TEST raid_read_error_test 00:14:52.300 ************************************ 00:14:52.558 23:00:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:52.558 23:00:41 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:52.558 23:00:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:52.558 23:00:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.558 23:00:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.558 ************************************ 00:14:52.558 START TEST raid_write_error_test 00:14:52.558 ************************************ 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8oJKqPz0m4 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132029 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132029 /var/tmp/spdk-raid.sock 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 132029 ']' 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.558 23:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.558 [2024-07-13 23:00:41.807516] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:52.558 [2024-07-13 23:00:41.807808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132029 ] 00:14:52.558 [2024-07-13 23:00:41.946268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.816 [2024-07-13 23:00:42.016957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.816 [2024-07-13 23:00:42.089532] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.749 23:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.749 23:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:53.749 23:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:53.749 23:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:53.749 BaseBdev1_malloc 00:14:53.749 23:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:54.008 true 00:14:54.008 23:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:54.266 [2024-07-13 23:00:43.495395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:54.266 [2024-07-13 23:00:43.495529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.266 [2024-07-13 23:00:43.495586] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:14:54.266 [2024-07-13 23:00:43.495661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.266 [2024-07-13 23:00:43.498292] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.266 [2024-07-13 23:00:43.498368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.266 BaseBdev1 00:14:54.266 23:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:54.266 23:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:54.524 BaseBdev2_malloc 00:14:54.524 23:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:54.783 true 00:14:54.783 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:55.041 [2024-07-13 23:00:44.216964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:55.041 [2024-07-13 23:00:44.217063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.041 [2024-07-13 23:00:44.217112] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:55.041 [2024-07-13 
23:00:44.217164] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.041 [2024-07-13 23:00:44.219432] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.041 [2024-07-13 23:00:44.219485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.041 BaseBdev2 00:14:55.041 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:55.299 [2024-07-13 23:00:44.473106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.299 [2024-07-13 23:00:44.475252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.299 [2024-07-13 23:00:44.475519] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:55.299 [2024-07-13 23:00:44.475537] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:55.299 [2024-07-13 23:00:44.475653] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:55.299 [2024-07-13 23:00:44.476084] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:55.299 [2024-07-13 23:00:44.476109] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:14:55.299 [2024-07-13 23:00:44.476266] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.299 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.565 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.565 "name": "raid_bdev1", 00:14:55.565 "uuid": "7d83dce3-c62f-4fbb-8108-10aec8c98195", 00:14:55.565 "strip_size_kb": 64, 00:14:55.565 "state": "online", 00:14:55.565 "raid_level": "raid0", 00:14:55.565 "superblock": true, 00:14:55.565 "num_base_bdevs": 2, 00:14:55.565 "num_base_bdevs_discovered": 2, 00:14:55.565 "num_base_bdevs_operational": 2, 00:14:55.565 "base_bdevs_list": [ 00:14:55.565 { 00:14:55.565 
"name": "BaseBdev1", 00:14:55.565 "uuid": "288e7fc5-059c-5119-9c1b-9639f9583a50", 00:14:55.565 "is_configured": true, 00:14:55.565 "data_offset": 2048, 00:14:55.565 "data_size": 63488 00:14:55.565 }, 00:14:55.565 { 00:14:55.565 "name": "BaseBdev2", 00:14:55.565 "uuid": "a6c4cab6-4850-5c01-8ba9-5c8507bdaf36", 00:14:55.565 "is_configured": true, 00:14:55.565 "data_offset": 2048, 00:14:55.565 "data_size": 63488 00:14:55.565 } 00:14:55.565 ] 00:14:55.565 }' 00:14:55.565 23:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.565 23:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.147 23:00:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:56.147 23:00:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:56.147 [2024-07-13 23:00:45.517804] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:14:57.083 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.342 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.601 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.601 "name": "raid_bdev1", 00:14:57.601 "uuid": "7d83dce3-c62f-4fbb-8108-10aec8c98195", 00:14:57.601 "strip_size_kb": 64, 00:14:57.601 "state": "online", 00:14:57.601 "raid_level": "raid0", 00:14:57.601 "superblock": true, 00:14:57.601 "num_base_bdevs": 2, 00:14:57.601 "num_base_bdevs_discovered": 2, 00:14:57.601 "num_base_bdevs_operational": 2, 00:14:57.601 "base_bdevs_list": [ 00:14:57.601 { 00:14:57.601 
"name": "BaseBdev1", 00:14:57.601 "uuid": "288e7fc5-059c-5119-9c1b-9639f9583a50", 00:14:57.601 "is_configured": true, 00:14:57.601 "data_offset": 2048, 00:14:57.601 "data_size": 63488 00:14:57.601 }, 00:14:57.601 { 00:14:57.601 "name": "BaseBdev2", 00:14:57.601 "uuid": "a6c4cab6-4850-5c01-8ba9-5c8507bdaf36", 00:14:57.601 "is_configured": true, 00:14:57.601 "data_offset": 2048, 00:14:57.601 "data_size": 63488 00:14:57.601 } 00:14:57.601 ] 00:14:57.601 }' 00:14:57.601 23:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.601 23:00:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.168 23:00:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:58.426 [2024-07-13 23:00:47.741494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.426 [2024-07-13 23:00:47.741563] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.426 [2024-07-13 23:00:47.744074] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.426 [2024-07-13 23:00:47.744134] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.426 [2024-07-13 23:00:47.744176] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.426 [2024-07-13 23:00:47.744189] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:14:58.426 0 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132029 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 132029 ']' 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 132029 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132029 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132029' 00:14:58.426 killing process with pid 132029 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 132029 00:14:58.426 [2024-07-13 23:00:47.784152] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.426 23:00:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 132029 00:14:58.426 [2024-07-13 23:00:47.799013] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8oJKqPz0m4 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:14:58.993 
23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:14:58.993 00:14:58.993 real 0m6.390s 00:14:58.993 user 0m10.198s 00:14:58.993 sys 0m0.848s 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.993 23:00:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.993 ************************************ 00:14:58.993 END TEST raid_write_error_test 00:14:58.993 ************************************ 00:14:58.993 23:00:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:58.993 23:00:48 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:58.993 23:00:48 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:58.993 23:00:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:58.993 23:00:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.993 23:00:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.993 ************************************ 00:14:58.993 START TEST raid_state_function_test 00:14:58.993 ************************************ 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:58.993 23:00:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=132212 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132212' 00:14:58.993 Process raid pid: 132212 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 132212 /var/tmp/spdk-raid.sock 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 132212 ']' 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.993 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:58.993 [2024-07-13 23:00:48.250033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:58.993 [2024-07-13 23:00:48.250494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.994 [2024-07-13 23:00:48.389170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.252 [2024-07-13 23:00:48.457795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.252 [2024-07-13 23:00:48.530273] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.252 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.252 23:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:59.252 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.510 [2024-07-13 23:00:48.852643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.510 [2024-07-13 23:00:48.852742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.510 [2024-07-13 23:00:48.852768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.510 [2024-07-13 23:00:48.852791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.510 23:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.768 23:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.768 "name": "Existed_Raid", 00:14:59.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.768 "strip_size_kb": 64, 00:14:59.768 "state": "configuring", 00:14:59.768 "raid_level": "concat", 00:14:59.768 "superblock": false, 00:14:59.768 "num_base_bdevs": 2, 00:14:59.768 "num_base_bdevs_discovered": 0, 00:14:59.768 "num_base_bdevs_operational": 2, 00:14:59.768 
"base_bdevs_list": [ 00:14:59.768 { 00:14:59.768 "name": "BaseBdev1", 00:14:59.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.768 "is_configured": false, 00:14:59.768 "data_offset": 0, 00:14:59.768 "data_size": 0 00:14:59.768 }, 00:14:59.768 { 00:14:59.768 "name": "BaseBdev2", 00:14:59.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.768 "is_configured": false, 00:14:59.768 "data_offset": 0, 00:14:59.768 "data_size": 0 00:14:59.768 } 00:14:59.768 ] 00:14:59.768 }' 00:14:59.769 23:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.769 23:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 23:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.593 [2024-07-13 23:00:49.992680] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.593 [2024-07-13 23:00:49.992725] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:00.851 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.109 [2024-07-13 23:00:50.260757] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.109 [2024-07-13 23:00:50.260861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.109 [2024-07-13 23:00:50.260887] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.109 [2024-07-13 23:00:50.260960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.109 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.109 [2024-07-13 23:00:50.492152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.109 BaseBdev1 00:15:01.109 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:01.110 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:01.110 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:01.110 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:01.110 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:01.110 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:01.110 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.367 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.625 [ 00:15:01.625 { 00:15:01.625 "name": "BaseBdev1", 00:15:01.625 "aliases": [ 00:15:01.625 "4969f64e-5cbc-4160-a13b-9b2c56134d36" 00:15:01.625 ], 00:15:01.625 "product_name": "Malloc disk", 00:15:01.625 "block_size": 512, 
00:15:01.625 "num_blocks": 65536, 00:15:01.625 "uuid": "4969f64e-5cbc-4160-a13b-9b2c56134d36", 00:15:01.625 "assigned_rate_limits": { 00:15:01.625 "rw_ios_per_sec": 0, 00:15:01.625 "rw_mbytes_per_sec": 0, 00:15:01.625 "r_mbytes_per_sec": 0, 00:15:01.625 "w_mbytes_per_sec": 0 00:15:01.625 }, 00:15:01.625 "claimed": true, 00:15:01.625 "claim_type": "exclusive_write", 00:15:01.625 "zoned": false, 00:15:01.625 "supported_io_types": { 00:15:01.625 "read": true, 00:15:01.625 "write": true, 00:15:01.625 "unmap": true, 00:15:01.625 "flush": true, 00:15:01.625 "reset": true, 00:15:01.625 "nvme_admin": false, 00:15:01.625 "nvme_io": false, 00:15:01.625 "nvme_io_md": false, 00:15:01.625 "write_zeroes": true, 00:15:01.625 "zcopy": true, 00:15:01.625 "get_zone_info": false, 00:15:01.625 "zone_management": false, 00:15:01.625 "zone_append": false, 00:15:01.625 "compare": false, 00:15:01.625 "compare_and_write": false, 00:15:01.625 "abort": true, 00:15:01.625 "seek_hole": false, 00:15:01.625 "seek_data": false, 00:15:01.625 "copy": true, 00:15:01.625 "nvme_iov_md": false 00:15:01.625 }, 00:15:01.625 "memory_domains": [ 00:15:01.625 { 00:15:01.625 "dma_device_id": "system", 00:15:01.625 "dma_device_type": 1 00:15:01.625 }, 00:15:01.625 { 00:15:01.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.625 "dma_device_type": 2 00:15:01.625 } 00:15:01.625 ], 00:15:01.625 "driver_specific": {} 00:15:01.625 } 00:15:01.625 ] 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.625 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.626 23:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.883 23:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.883 "name": "Existed_Raid", 00:15:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.883 "strip_size_kb": 64, 00:15:01.883 "state": "configuring", 00:15:01.883 "raid_level": "concat", 00:15:01.883 "superblock": false, 00:15:01.883 "num_base_bdevs": 2, 00:15:01.883 "num_base_bdevs_discovered": 1, 00:15:01.883 "num_base_bdevs_operational": 2, 00:15:01.883 "base_bdevs_list": [ 00:15:01.884 { 00:15:01.884 "name": 
"BaseBdev1", 00:15:01.884 "uuid": "4969f64e-5cbc-4160-a13b-9b2c56134d36", 00:15:01.884 "is_configured": true, 00:15:01.884 "data_offset": 0, 00:15:01.884 "data_size": 65536 00:15:01.884 }, 00:15:01.884 { 00:15:01.884 "name": "BaseBdev2", 00:15:01.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.884 "is_configured": false, 00:15:01.884 "data_offset": 0, 00:15:01.884 "data_size": 0 00:15:01.884 } 00:15:01.884 ] 00:15:01.884 }' 00:15:01.884 23:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.884 23:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.818 23:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.818 [2024-07-13 23:00:52.068458] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.818 [2024-07-13 23:00:52.068544] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:02.818 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:03.077 [2024-07-13 23:00:52.280519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.077 [2024-07-13 23:00:52.282648] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.077 [2024-07-13 23:00:52.282718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.077 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.336 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.336 "name": "Existed_Raid", 
00:15:03.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.336 "strip_size_kb": 64, 00:15:03.336 "state": "configuring", 00:15:03.336 "raid_level": "concat", 00:15:03.336 "superblock": false, 00:15:03.336 "num_base_bdevs": 2, 00:15:03.336 "num_base_bdevs_discovered": 1, 00:15:03.336 "num_base_bdevs_operational": 2, 00:15:03.336 "base_bdevs_list": [ 00:15:03.336 { 00:15:03.336 "name": "BaseBdev1", 00:15:03.336 "uuid": "4969f64e-5cbc-4160-a13b-9b2c56134d36", 00:15:03.336 "is_configured": true, 00:15:03.336 "data_offset": 0, 00:15:03.336 "data_size": 65536 00:15:03.336 }, 00:15:03.336 { 00:15:03.336 "name": "BaseBdev2", 00:15:03.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.336 "is_configured": false, 00:15:03.336 "data_offset": 0, 00:15:03.336 "data_size": 0 00:15:03.336 } 00:15:03.336 ] 00:15:03.336 }' 00:15:03.336 23:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.336 23:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.903 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.162 [2024-07-13 23:00:53.373690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.162 [2024-07-13 23:00:53.373752] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:04.162 [2024-07-13 23:00:53.373763] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:04.162 [2024-07-13 23:00:53.373911] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:04.162 [2024-07-13 23:00:53.374454] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:04.162 [2024-07-13 23:00:53.374479] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:04.162 [2024-07-13 23:00:53.374797] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.162 BaseBdev2 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:04.162 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.421 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.680 [ 00:15:04.680 { 00:15:04.680 "name": "BaseBdev2", 00:15:04.680 "aliases": [ 00:15:04.680 "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad" 00:15:04.680 ], 00:15:04.680 "product_name": "Malloc disk", 00:15:04.680 "block_size": 512, 00:15:04.680 "num_blocks": 65536, 00:15:04.680 "uuid": "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad", 
00:15:04.680 "assigned_rate_limits": { 00:15:04.680 "rw_ios_per_sec": 0, 00:15:04.680 "rw_mbytes_per_sec": 0, 00:15:04.680 "r_mbytes_per_sec": 0, 00:15:04.680 "w_mbytes_per_sec": 0 00:15:04.680 }, 00:15:04.680 "claimed": true, 00:15:04.680 "claim_type": "exclusive_write", 00:15:04.680 "zoned": false, 00:15:04.680 "supported_io_types": { 00:15:04.680 "read": true, 00:15:04.680 "write": true, 00:15:04.680 "unmap": true, 00:15:04.680 "flush": true, 00:15:04.680 "reset": true, 00:15:04.680 "nvme_admin": false, 00:15:04.680 "nvme_io": false, 00:15:04.680 "nvme_io_md": false, 00:15:04.680 "write_zeroes": true, 00:15:04.680 "zcopy": true, 00:15:04.680 "get_zone_info": false, 00:15:04.680 "zone_management": false, 00:15:04.680 "zone_append": false, 00:15:04.680 "compare": false, 00:15:04.680 "compare_and_write": false, 00:15:04.680 "abort": true, 00:15:04.680 "seek_hole": false, 00:15:04.680 "seek_data": false, 00:15:04.680 "copy": true, 00:15:04.680 "nvme_iov_md": false 00:15:04.680 }, 00:15:04.680 "memory_domains": [ 00:15:04.680 { 00:15:04.680 "dma_device_id": "system", 00:15:04.680 "dma_device_type": 1 00:15:04.680 }, 00:15:04.680 { 00:15:04.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.680 "dma_device_type": 2 00:15:04.680 } 00:15:04.680 ], 00:15:04.680 "driver_specific": {} 00:15:04.680 } 00:15:04.680 ] 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.680 23:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.939 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.939 "name": "Existed_Raid", 00:15:04.939 "uuid": "5734ef6e-3b9a-4d3c-936d-d2a10b02baf4", 00:15:04.939 "strip_size_kb": 64, 00:15:04.939 "state": "online", 00:15:04.939 "raid_level": "concat", 00:15:04.939 "superblock": false, 00:15:04.939 "num_base_bdevs": 2, 00:15:04.939 "num_base_bdevs_discovered": 2, 00:15:04.939 
"num_base_bdevs_operational": 2, 00:15:04.939 "base_bdevs_list": [ 00:15:04.939 { 00:15:04.939 "name": "BaseBdev1", 00:15:04.939 "uuid": "4969f64e-5cbc-4160-a13b-9b2c56134d36", 00:15:04.939 "is_configured": true, 00:15:04.939 "data_offset": 0, 00:15:04.939 "data_size": 65536 00:15:04.939 }, 00:15:04.939 { 00:15:04.939 "name": "BaseBdev2", 00:15:04.939 "uuid": "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad", 00:15:04.939 "is_configured": true, 00:15:04.939 "data_offset": 0, 00:15:04.939 "data_size": 65536 00:15:04.939 } 00:15:04.939 ] 00:15:04.939 }' 00:15:04.939 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.939 23:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:05.518 23:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:05.776 [2024-07-13 23:00:55.078273] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.776 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:05.776 "name": "Existed_Raid", 00:15:05.776 "aliases": [ 00:15:05.776 "5734ef6e-3b9a-4d3c-936d-d2a10b02baf4" 00:15:05.776 ], 00:15:05.776 "product_name": "Raid Volume", 00:15:05.776 "block_size": 512, 00:15:05.776 "num_blocks": 131072, 00:15:05.776 "uuid": "5734ef6e-3b9a-4d3c-936d-d2a10b02baf4", 00:15:05.776 "assigned_rate_limits": { 00:15:05.776 "rw_ios_per_sec": 0, 00:15:05.776 "rw_mbytes_per_sec": 0, 00:15:05.776 "r_mbytes_per_sec": 0, 00:15:05.776 "w_mbytes_per_sec": 0 00:15:05.776 }, 00:15:05.776 "claimed": false, 00:15:05.776 "zoned": false, 00:15:05.776 "supported_io_types": { 00:15:05.776 "read": true, 00:15:05.776 "write": true, 00:15:05.776 "unmap": true, 00:15:05.776 "flush": true, 00:15:05.776 "reset": true, 00:15:05.776 "nvme_admin": false, 00:15:05.776 "nvme_io": false, 00:15:05.776 "nvme_io_md": false, 00:15:05.776 "write_zeroes": true, 00:15:05.776 "zcopy": false, 00:15:05.776 "get_zone_info": false, 00:15:05.776 "zone_management": false, 00:15:05.776 "zone_append": false, 00:15:05.776 "compare": false, 00:15:05.776 "compare_and_write": false, 00:15:05.776 "abort": false, 00:15:05.776 "seek_hole": false, 00:15:05.776 "seek_data": false, 00:15:05.776 "copy": false, 00:15:05.776 "nvme_iov_md": false 00:15:05.776 }, 00:15:05.776 "memory_domains": [ 00:15:05.776 { 00:15:05.776 "dma_device_id": "system", 00:15:05.776 "dma_device_type": 1 00:15:05.776 }, 00:15:05.777 { 00:15:05.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.777 "dma_device_type": 2 00:15:05.777 }, 00:15:05.777 { 00:15:05.777 "dma_device_id": "system", 00:15:05.777 "dma_device_type": 1 00:15:05.777 }, 
00:15:05.777 { 00:15:05.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.777 "dma_device_type": 2 00:15:05.777 } 00:15:05.777 ], 00:15:05.777 "driver_specific": { 00:15:05.777 "raid": { 00:15:05.777 "uuid": "5734ef6e-3b9a-4d3c-936d-d2a10b02baf4", 00:15:05.777 "strip_size_kb": 64, 00:15:05.777 "state": "online", 00:15:05.777 "raid_level": "concat", 00:15:05.777 "superblock": false, 00:15:05.777 "num_base_bdevs": 2, 00:15:05.777 "num_base_bdevs_discovered": 2, 00:15:05.777 "num_base_bdevs_operational": 2, 00:15:05.777 "base_bdevs_list": [ 00:15:05.777 { 00:15:05.777 "name": "BaseBdev1", 00:15:05.777 "uuid": "4969f64e-5cbc-4160-a13b-9b2c56134d36", 00:15:05.777 "is_configured": true, 00:15:05.777 "data_offset": 0, 00:15:05.777 "data_size": 65536 00:15:05.777 }, 00:15:05.777 { 00:15:05.777 "name": "BaseBdev2", 00:15:05.777 "uuid": "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad", 00:15:05.777 "is_configured": true, 00:15:05.777 "data_offset": 0, 00:15:05.777 "data_size": 65536 00:15:05.777 } 00:15:05.777 ] 00:15:05.777 } 00:15:05.777 } 00:15:05.777 }' 00:15:05.777 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.777 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:05.777 BaseBdev2' 00:15:05.777 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:05.777 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:05.777 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.035 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.035 "name": "BaseBdev1", 00:15:06.035 "aliases": [ 00:15:06.035 "4969f64e-5cbc-4160-a13b-9b2c56134d36" 00:15:06.035 ], 00:15:06.035 "product_name": "Malloc disk", 00:15:06.035 "block_size": 512, 00:15:06.035 "num_blocks": 65536, 00:15:06.035 "uuid": "4969f64e-5cbc-4160-a13b-9b2c56134d36", 00:15:06.035 "assigned_rate_limits": { 00:15:06.035 "rw_ios_per_sec": 0, 00:15:06.035 "rw_mbytes_per_sec": 0, 00:15:06.035 "r_mbytes_per_sec": 0, 00:15:06.035 "w_mbytes_per_sec": 0 00:15:06.035 }, 00:15:06.035 "claimed": true, 00:15:06.035 "claim_type": "exclusive_write", 00:15:06.035 "zoned": false, 00:15:06.035 "supported_io_types": { 00:15:06.035 "read": true, 00:15:06.035 "write": true, 00:15:06.035 "unmap": true, 00:15:06.035 "flush": true, 00:15:06.035 "reset": true, 00:15:06.035 "nvme_admin": false, 00:15:06.035 "nvme_io": false, 00:15:06.035 "nvme_io_md": false, 00:15:06.035 "write_zeroes": true, 00:15:06.035 "zcopy": true, 00:15:06.035 "get_zone_info": false, 00:15:06.035 "zone_management": false, 00:15:06.035 "zone_append": false, 00:15:06.035 "compare": false, 00:15:06.035 "compare_and_write": false, 00:15:06.035 "abort": true, 00:15:06.035 "seek_hole": false, 00:15:06.035 "seek_data": false, 00:15:06.035 "copy": true, 00:15:06.035 "nvme_iov_md": false 00:15:06.035 }, 00:15:06.035 "memory_domains": [ 00:15:06.035 { 00:15:06.035 "dma_device_id": "system", 00:15:06.035 "dma_device_type": 1 00:15:06.035 }, 00:15:06.035 { 00:15:06.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.035 "dma_device_type": 2 00:15:06.035 } 00:15:06.035 ], 00:15:06.035 "driver_specific": {} 00:15:06.035 }' 00:15:06.035 23:00:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.035 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.294 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.553 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.553 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.553 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:06.553 23:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.812 "name": "BaseBdev2", 00:15:06.812 "aliases": [ 00:15:06.812 "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad" 00:15:06.812 ], 00:15:06.812 "product_name": "Malloc disk", 00:15:06.812 "block_size": 512, 00:15:06.812 "num_blocks": 65536, 00:15:06.812 "uuid": "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad", 00:15:06.812 "assigned_rate_limits": { 00:15:06.812 "rw_ios_per_sec": 0, 00:15:06.812 "rw_mbytes_per_sec": 0, 00:15:06.812 "r_mbytes_per_sec": 0, 00:15:06.812 "w_mbytes_per_sec": 0 00:15:06.812 }, 00:15:06.812 "claimed": true, 00:15:06.812 "claim_type": "exclusive_write", 00:15:06.812 "zoned": false, 00:15:06.812 "supported_io_types": { 00:15:06.812 "read": true, 00:15:06.812 "write": true, 00:15:06.812 "unmap": true, 00:15:06.812 "flush": true, 00:15:06.812 "reset": true, 00:15:06.812 "nvme_admin": false, 00:15:06.812 "nvme_io": false, 00:15:06.812 "nvme_io_md": false, 00:15:06.812 "write_zeroes": true, 00:15:06.812 "zcopy": true, 00:15:06.812 "get_zone_info": false, 00:15:06.812 "zone_management": false, 00:15:06.812 "zone_append": false, 00:15:06.812 "compare": false, 00:15:06.812 "compare_and_write": false, 00:15:06.812 "abort": true, 00:15:06.812 "seek_hole": false, 00:15:06.812 "seek_data": false, 00:15:06.812 "copy": true, 00:15:06.812 "nvme_iov_md": false 00:15:06.812 }, 00:15:06.812 "memory_domains": [ 00:15:06.812 { 00:15:06.812 "dma_device_id": "system", 00:15:06.812 "dma_device_type": 1 00:15:06.812 }, 00:15:06.812 { 00:15:06.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.812 "dma_device_type": 2 00:15:06.812 } 00:15:06.812 ], 00:15:06.812 "driver_specific": {} 00:15:06.812 }' 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.812 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.071 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.071 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.071 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.071 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.071 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.071 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:07.330 [2024-07-13 23:00:56.682521] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.330 [2024-07-13 23:00:56.682580] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.330 [2024-07-13 23:00:56.682693] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.330 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.589 23:00:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.589 "name": "Existed_Raid", 00:15:07.589 "uuid": "5734ef6e-3b9a-4d3c-936d-d2a10b02baf4", 00:15:07.589 "strip_size_kb": 64, 00:15:07.589 "state": "offline", 00:15:07.589 "raid_level": "concat", 00:15:07.589 "superblock": false, 00:15:07.589 "num_base_bdevs": 2, 00:15:07.589 "num_base_bdevs_discovered": 1, 00:15:07.589 "num_base_bdevs_operational": 1, 00:15:07.589 "base_bdevs_list": [ 00:15:07.589 { 00:15:07.589 "name": null, 00:15:07.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.589 "is_configured": false, 00:15:07.589 "data_offset": 0, 00:15:07.589 "data_size": 65536 00:15:07.589 }, 00:15:07.589 { 00:15:07.589 "name": "BaseBdev2", 00:15:07.589 "uuid": "d48dadb4-dd43-4afe-bf1e-b69d0712e3ad", 00:15:07.589 "is_configured": true, 00:15:07.589 "data_offset": 0, 00:15:07.589 "data_size": 65536 00:15:07.589 } 00:15:07.589 ] 00:15:07.589 }' 00:15:07.589 23:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.589 23:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.544 23:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:08.802 [2024-07-13 23:00:58.139185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.802 [2024-07-13 23:00:58.139319] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:08.802 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:08.802 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:08.802 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.802 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 132212 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 132212 ']' 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 132212 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132212 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132212' 00:15:09.061 killing process with pid 132212 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 132212 00:15:09.061 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 132212 00:15:09.061 [2024-07-13 23:00:58.467729] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.061 [2024-07-13 23:00:58.467839] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.627 23:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:09.627 00:15:09.627 real 0m10.577s 00:15:09.627 user 0m19.751s 00:15:09.628 sys 0m1.395s 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.628 ************************************ 00:15:09.628 END TEST raid_state_function_test 00:15:09.628 ************************************ 00:15:09.628 23:00:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:09.628 23:00:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:09.628 23:00:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:09.628 23:00:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.628 23:00:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.628 ************************************ 00:15:09.628 START TEST raid_state_function_test_sb 00:15:09.628 ************************************ 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:09.628 23:00:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=132581 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132581' 00:15:09.628 Process raid pid: 132581 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 132581 /var/tmp/spdk-raid.sock 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 132581 ']' 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.628 23:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.628 [2024-07-13 23:00:58.886552] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:15:09.628 [2024-07-13 23:00:58.886826] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.628 [2024-07-13 23:00:59.025518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.887 [2024-07-13 23:00:59.101575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.887 [2024-07-13 23:00:59.177487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.454 23:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.454 23:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:10.454 23:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.713 [2024-07-13 23:01:00.056795] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.713 [2024-07-13 23:01:00.056931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.713 [2024-07-13 23:01:00.056957] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.713 [2024-07-13 23:01:00.056980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.713 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.972 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.972 "name": "Existed_Raid", 00:15:10.972 "uuid": "3c6ed196-460e-40e2-8386-17be3c22e73c", 00:15:10.972 "strip_size_kb": 64, 00:15:10.972 "state": "configuring", 00:15:10.972 "raid_level": "concat", 00:15:10.972 "superblock": true, 00:15:10.972 "num_base_bdevs": 2, 00:15:10.972 "num_base_bdevs_discovered": 0, 00:15:10.972 
"num_base_bdevs_operational": 2, 00:15:10.972 "base_bdevs_list": [ 00:15:10.972 { 00:15:10.972 "name": "BaseBdev1", 00:15:10.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.972 "is_configured": false, 00:15:10.972 "data_offset": 0, 00:15:10.972 "data_size": 0 00:15:10.972 }, 00:15:10.972 { 00:15:10.972 "name": "BaseBdev2", 00:15:10.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.972 "is_configured": false, 00:15:10.972 "data_offset": 0, 00:15:10.972 "data_size": 0 00:15:10.972 } 00:15:10.972 ] 00:15:10.972 }' 00:15:10.972 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.972 23:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.909 23:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.909 [2024-07-13 23:01:01.236872] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.909 [2024-07-13 23:01:01.236941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:11.909 23:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:12.167 [2024-07-13 23:01:01.528940] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.167 [2024-07-13 23:01:01.529032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.167 [2024-07-13 23:01:01.529047] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.167 [2024-07-13 23:01:01.529082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.167 23:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.425 [2024-07-13 23:01:01.814977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.425 BaseBdev1 00:15:12.425 23:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:12.425 23:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.425 23:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.425 23:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:12.425 23:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.425 23:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.426 23:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.684 23:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.942 [ 00:15:12.942 { 00:15:12.942 "name": "BaseBdev1", 00:15:12.942 "aliases": [ 00:15:12.942 "762b2d84-63e4-457e-bea3-cde5f10ee75d" 
00:15:12.942 ], 00:15:12.942 "product_name": "Malloc disk", 00:15:12.942 "block_size": 512, 00:15:12.942 "num_blocks": 65536, 00:15:12.942 "uuid": "762b2d84-63e4-457e-bea3-cde5f10ee75d", 00:15:12.942 "assigned_rate_limits": { 00:15:12.942 "rw_ios_per_sec": 0, 00:15:12.942 "rw_mbytes_per_sec": 0, 00:15:12.942 "r_mbytes_per_sec": 0, 00:15:12.942 "w_mbytes_per_sec": 0 00:15:12.942 }, 00:15:12.942 "claimed": true, 00:15:12.942 "claim_type": "exclusive_write", 00:15:12.942 "zoned": false, 00:15:12.942 "supported_io_types": { 00:15:12.942 "read": true, 00:15:12.942 "write": true, 00:15:12.942 "unmap": true, 00:15:12.942 "flush": true, 00:15:12.942 "reset": true, 00:15:12.942 "nvme_admin": false, 00:15:12.942 "nvme_io": false, 00:15:12.942 "nvme_io_md": false, 00:15:12.942 "write_zeroes": true, 00:15:12.942 "zcopy": true, 00:15:12.942 "get_zone_info": false, 00:15:12.942 "zone_management": false, 00:15:12.942 "zone_append": false, 00:15:12.942 "compare": false, 00:15:12.942 "compare_and_write": false, 00:15:12.942 "abort": true, 00:15:12.942 "seek_hole": false, 00:15:12.942 "seek_data": false, 00:15:12.942 "copy": true, 00:15:12.942 "nvme_iov_md": false 00:15:12.942 }, 00:15:12.942 "memory_domains": [ 00:15:12.942 { 00:15:12.942 "dma_device_id": "system", 00:15:12.942 "dma_device_type": 1 00:15:12.942 }, 00:15:12.942 { 00:15:12.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.942 "dma_device_type": 2 00:15:12.942 } 00:15:12.942 ], 00:15:12.942 "driver_specific": {} 00:15:12.942 } 00:15:12.942 ] 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.942 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.200 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.200 "name": "Existed_Raid", 00:15:13.200 "uuid": "6589d008-ba37-4619-ac53-70c6d4c43205", 00:15:13.200 "strip_size_kb": 64, 00:15:13.200 "state": "configuring", 00:15:13.200 "raid_level": "concat", 00:15:13.200 "superblock": true, 00:15:13.200 "num_base_bdevs": 2, 00:15:13.200 
"num_base_bdevs_discovered": 1, 00:15:13.200 "num_base_bdevs_operational": 2, 00:15:13.200 "base_bdevs_list": [ 00:15:13.200 { 00:15:13.200 "name": "BaseBdev1", 00:15:13.201 "uuid": "762b2d84-63e4-457e-bea3-cde5f10ee75d", 00:15:13.201 "is_configured": true, 00:15:13.201 "data_offset": 2048, 00:15:13.201 "data_size": 63488 00:15:13.201 }, 00:15:13.201 { 00:15:13.201 "name": "BaseBdev2", 00:15:13.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.201 "is_configured": false, 00:15:13.201 "data_offset": 0, 00:15:13.201 "data_size": 0 00:15:13.201 } 00:15:13.201 ] 00:15:13.201 }' 00:15:13.201 23:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.201 23:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.767 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.024 [2024-07-13 23:01:03.375268] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.024 [2024-07-13 23:01:03.375350] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:14.024 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:14.281 [2024-07-13 23:01:03.583380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.281 [2024-07-13 23:01:03.585508] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.281 [2024-07-13 23:01:03.585577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.281 23:01:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.539 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.539 "name": "Existed_Raid", 00:15:14.539 "uuid": "3a0c1074-79a5-450f-a11f-f0df15487da8", 00:15:14.539 "strip_size_kb": 64, 00:15:14.539 "state": "configuring", 00:15:14.539 "raid_level": "concat", 00:15:14.539 "superblock": true, 00:15:14.539 "num_base_bdevs": 2, 00:15:14.539 "num_base_bdevs_discovered": 1, 00:15:14.539 "num_base_bdevs_operational": 2, 00:15:14.539 "base_bdevs_list": [ 00:15:14.539 { 00:15:14.539 "name": "BaseBdev1", 00:15:14.539 "uuid": "762b2d84-63e4-457e-bea3-cde5f10ee75d", 00:15:14.539 "is_configured": true, 00:15:14.539 "data_offset": 2048, 00:15:14.539 "data_size": 63488 00:15:14.539 }, 00:15:14.539 { 00:15:14.539 "name": "BaseBdev2", 00:15:14.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.539 "is_configured": false, 00:15:14.539 "data_offset": 0, 00:15:14.539 "data_size": 0 00:15:14.539 } 00:15:14.539 ] 00:15:14.539 }' 00:15:14.539 23:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.539 23:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.105 23:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.364 [2024-07-13 23:01:04.705597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.364 [2024-07-13 23:01:04.705876] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:15.364 [2024-07-13 23:01:04.705893] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.364 [2024-07-13 23:01:04.706064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:15.364 [2024-07-13 23:01:04.706495] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:15.364 [2024-07-13 23:01:04.706522] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:15.364 [2024-07-13 23:01:04.706683] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.364 BaseBdev2 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.364 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.623 23:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.881 [ 00:15:15.881 { 00:15:15.881 "name": "BaseBdev2", 00:15:15.881 
"aliases": [ 00:15:15.881 "416a66ae-8b8e-4a25-9d76-e66732a197a8" 00:15:15.881 ], 00:15:15.881 "product_name": "Malloc disk", 00:15:15.881 "block_size": 512, 00:15:15.881 "num_blocks": 65536, 00:15:15.881 "uuid": "416a66ae-8b8e-4a25-9d76-e66732a197a8", 00:15:15.881 "assigned_rate_limits": { 00:15:15.881 "rw_ios_per_sec": 0, 00:15:15.881 "rw_mbytes_per_sec": 0, 00:15:15.881 "r_mbytes_per_sec": 0, 00:15:15.881 "w_mbytes_per_sec": 0 00:15:15.881 }, 00:15:15.881 "claimed": true, 00:15:15.881 "claim_type": "exclusive_write", 00:15:15.881 "zoned": false, 00:15:15.881 "supported_io_types": { 00:15:15.881 "read": true, 00:15:15.881 "write": true, 00:15:15.882 "unmap": true, 00:15:15.882 "flush": true, 00:15:15.882 "reset": true, 00:15:15.882 "nvme_admin": false, 00:15:15.882 "nvme_io": false, 00:15:15.882 "nvme_io_md": false, 00:15:15.882 "write_zeroes": true, 00:15:15.882 "zcopy": true, 00:15:15.882 "get_zone_info": false, 00:15:15.882 "zone_management": false, 00:15:15.882 "zone_append": false, 00:15:15.882 "compare": false, 00:15:15.882 "compare_and_write": false, 00:15:15.882 "abort": true, 00:15:15.882 "seek_hole": false, 00:15:15.882 "seek_data": false, 00:15:15.882 "copy": true, 00:15:15.882 "nvme_iov_md": false 00:15:15.882 }, 00:15:15.882 "memory_domains": [ 00:15:15.882 { 00:15:15.882 "dma_device_id": "system", 00:15:15.882 "dma_device_type": 1 00:15:15.882 }, 00:15:15.882 { 00:15:15.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.882 "dma_device_type": 2 00:15:15.882 } 00:15:15.882 ], 00:15:15.882 "driver_specific": {} 00:15:15.882 } 00:15:15.882 ] 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.882 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.140 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.140 "name": "Existed_Raid", 
00:15:16.140 "uuid": "3a0c1074-79a5-450f-a11f-f0df15487da8", 00:15:16.140 "strip_size_kb": 64, 00:15:16.140 "state": "online", 00:15:16.140 "raid_level": "concat", 00:15:16.140 "superblock": true, 00:15:16.140 "num_base_bdevs": 2, 00:15:16.140 "num_base_bdevs_discovered": 2, 00:15:16.140 "num_base_bdevs_operational": 2, 00:15:16.140 "base_bdevs_list": [ 00:15:16.140 { 00:15:16.140 "name": "BaseBdev1", 00:15:16.140 "uuid": "762b2d84-63e4-457e-bea3-cde5f10ee75d", 00:15:16.140 "is_configured": true, 00:15:16.140 "data_offset": 2048, 00:15:16.140 "data_size": 63488 00:15:16.140 }, 00:15:16.140 { 00:15:16.140 "name": "BaseBdev2", 00:15:16.140 "uuid": "416a66ae-8b8e-4a25-9d76-e66732a197a8", 00:15:16.140 "is_configured": true, 00:15:16.140 "data_offset": 2048, 00:15:16.140 "data_size": 63488 00:15:16.140 } 00:15:16.140 ] 00:15:16.140 }' 00:15:16.140 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.140 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:16.706 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:16.964 [2024-07-13 23:01:06.334140] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.964 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:16.964 "name": "Existed_Raid", 00:15:16.964 "aliases": [ 00:15:16.964 "3a0c1074-79a5-450f-a11f-f0df15487da8" 00:15:16.964 ], 00:15:16.964 "product_name": "Raid Volume", 00:15:16.964 "block_size": 512, 00:15:16.964 "num_blocks": 126976, 00:15:16.964 "uuid": "3a0c1074-79a5-450f-a11f-f0df15487da8", 00:15:16.964 "assigned_rate_limits": { 00:15:16.964 "rw_ios_per_sec": 0, 00:15:16.964 "rw_mbytes_per_sec": 0, 00:15:16.964 "r_mbytes_per_sec": 0, 00:15:16.964 "w_mbytes_per_sec": 0 00:15:16.964 }, 00:15:16.964 "claimed": false, 00:15:16.964 "zoned": false, 00:15:16.964 "supported_io_types": { 00:15:16.964 "read": true, 00:15:16.964 "write": true, 00:15:16.964 "unmap": true, 00:15:16.964 "flush": true, 00:15:16.964 "reset": true, 00:15:16.964 "nvme_admin": false, 00:15:16.964 "nvme_io": false, 00:15:16.964 "nvme_io_md": false, 00:15:16.964 "write_zeroes": true, 00:15:16.964 "zcopy": false, 00:15:16.964 "get_zone_info": false, 00:15:16.964 "zone_management": false, 00:15:16.964 "zone_append": false, 00:15:16.964 "compare": false, 00:15:16.964 "compare_and_write": false, 00:15:16.964 "abort": false, 00:15:16.964 "seek_hole": false, 00:15:16.964 "seek_data": false, 00:15:16.964 "copy": false, 00:15:16.964 "nvme_iov_md": false 00:15:16.964 }, 00:15:16.964 "memory_domains": [ 
00:15:16.964 { 00:15:16.964 "dma_device_id": "system", 00:15:16.964 "dma_device_type": 1 00:15:16.964 }, 00:15:16.964 { 00:15:16.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.964 "dma_device_type": 2 00:15:16.964 }, 00:15:16.964 { 00:15:16.964 "dma_device_id": "system", 00:15:16.964 "dma_device_type": 1 00:15:16.964 }, 00:15:16.964 { 00:15:16.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.964 "dma_device_type": 2 00:15:16.964 } 00:15:16.964 ], 00:15:16.964 "driver_specific": { 00:15:16.964 "raid": { 00:15:16.964 "uuid": "3a0c1074-79a5-450f-a11f-f0df15487da8", 00:15:16.964 "strip_size_kb": 64, 00:15:16.964 "state": "online", 00:15:16.964 "raid_level": "concat", 00:15:16.964 "superblock": true, 00:15:16.964 "num_base_bdevs": 2, 00:15:16.964 "num_base_bdevs_discovered": 2, 00:15:16.964 "num_base_bdevs_operational": 2, 00:15:16.964 "base_bdevs_list": [ 00:15:16.964 { 00:15:16.964 "name": "BaseBdev1", 00:15:16.964 "uuid": "762b2d84-63e4-457e-bea3-cde5f10ee75d", 00:15:16.964 "is_configured": true, 00:15:16.964 "data_offset": 2048, 00:15:16.964 "data_size": 63488 00:15:16.964 }, 00:15:16.964 { 00:15:16.964 "name": "BaseBdev2", 00:15:16.964 "uuid": "416a66ae-8b8e-4a25-9d76-e66732a197a8", 00:15:16.964 "is_configured": true, 00:15:16.964 "data_offset": 2048, 00:15:16.964 "data_size": 63488 00:15:16.964 } 00:15:16.964 ] 00:15:16.964 } 00:15:16.964 } 00:15:16.964 }' 00:15:16.964 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.222 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:17.222 BaseBdev2' 00:15:17.222 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.222 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:17.222 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.480 "name": "BaseBdev1", 00:15:17.480 "aliases": [ 00:15:17.480 "762b2d84-63e4-457e-bea3-cde5f10ee75d" 00:15:17.480 ], 00:15:17.480 "product_name": "Malloc disk", 00:15:17.480 "block_size": 512, 00:15:17.480 "num_blocks": 65536, 00:15:17.480 "uuid": "762b2d84-63e4-457e-bea3-cde5f10ee75d", 00:15:17.480 "assigned_rate_limits": { 00:15:17.480 "rw_ios_per_sec": 0, 00:15:17.480 "rw_mbytes_per_sec": 0, 00:15:17.480 "r_mbytes_per_sec": 0, 00:15:17.480 "w_mbytes_per_sec": 0 00:15:17.480 }, 00:15:17.480 "claimed": true, 00:15:17.480 "claim_type": "exclusive_write", 00:15:17.480 "zoned": false, 00:15:17.480 "supported_io_types": { 00:15:17.480 "read": true, 00:15:17.480 "write": true, 00:15:17.480 "unmap": true, 00:15:17.480 "flush": true, 00:15:17.480 "reset": true, 00:15:17.480 "nvme_admin": false, 00:15:17.480 "nvme_io": false, 00:15:17.480 "nvme_io_md": false, 00:15:17.480 "write_zeroes": true, 00:15:17.480 "zcopy": true, 00:15:17.480 "get_zone_info": false, 00:15:17.480 "zone_management": false, 00:15:17.480 "zone_append": false, 00:15:17.480 "compare": false, 00:15:17.480 "compare_and_write": false, 00:15:17.480 "abort": true, 00:15:17.480 "seek_hole": false, 00:15:17.480 "seek_data": false, 00:15:17.480 "copy": true, 00:15:17.480 "nvme_iov_md": false 00:15:17.480 }, 00:15:17.480 "memory_domains": [ 
00:15:17.480 { 00:15:17.480 "dma_device_id": "system", 00:15:17.480 "dma_device_type": 1 00:15:17.480 }, 00:15:17.480 { 00:15:17.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.480 "dma_device_type": 2 00:15:17.480 } 00:15:17.480 ], 00:15:17.480 "driver_specific": {} 00:15:17.480 }' 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.480 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.737 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.737 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.737 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.737 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.737 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.737 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.737 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:17.737 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.995 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.995 "name": "BaseBdev2", 00:15:17.995 "aliases": [ 00:15:17.995 "416a66ae-8b8e-4a25-9d76-e66732a197a8" 00:15:17.995 ], 00:15:17.995 "product_name": "Malloc disk", 00:15:17.995 "block_size": 512, 00:15:17.995 "num_blocks": 65536, 00:15:17.995 "uuid": "416a66ae-8b8e-4a25-9d76-e66732a197a8", 00:15:17.995 "assigned_rate_limits": { 00:15:17.995 "rw_ios_per_sec": 0, 00:15:17.995 "rw_mbytes_per_sec": 0, 00:15:17.995 "r_mbytes_per_sec": 0, 00:15:17.995 "w_mbytes_per_sec": 0 00:15:17.995 }, 00:15:17.995 "claimed": true, 00:15:17.995 "claim_type": "exclusive_write", 00:15:17.995 "zoned": false, 00:15:17.995 "supported_io_types": { 00:15:17.995 "read": true, 00:15:17.995 "write": true, 00:15:17.995 "unmap": true, 00:15:17.995 "flush": true, 00:15:17.995 "reset": true, 00:15:17.995 "nvme_admin": false, 00:15:17.995 "nvme_io": false, 00:15:17.995 "nvme_io_md": false, 00:15:17.995 "write_zeroes": true, 00:15:17.995 "zcopy": true, 00:15:17.995 "get_zone_info": false, 00:15:17.995 "zone_management": false, 00:15:17.995 "zone_append": false, 00:15:17.995 "compare": false, 00:15:17.995 "compare_and_write": false, 00:15:17.995 "abort": true, 00:15:17.995 "seek_hole": false, 00:15:17.995 "seek_data": false, 00:15:17.995 "copy": true, 00:15:17.995 "nvme_iov_md": false 00:15:17.995 }, 00:15:17.995 "memory_domains": [ 00:15:17.995 { 00:15:17.995 "dma_device_id": "system", 00:15:17.995 "dma_device_type": 1 00:15:17.995 }, 00:15:17.995 { 00:15:17.995 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:17.995 "dma_device_type": 2 00:15:17.995 } 00:15:17.995 ], 00:15:17.995 "driver_specific": {} 00:15:17.995 }' 00:15:17.995 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.995 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.252 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.253 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.510 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.510 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.510 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.768 [2024-07-13 23:01:07.957627] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.768 [2024-07-13 23:01:07.957664] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.768 [2024-07-13 23:01:07.957760] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.768 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.769 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.027 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.027 "name": "Existed_Raid", 00:15:19.027 "uuid": "3a0c1074-79a5-450f-a11f-f0df15487da8", 00:15:19.027 "strip_size_kb": 64, 00:15:19.027 "state": "offline", 00:15:19.027 "raid_level": "concat", 00:15:19.027 "superblock": true, 00:15:19.027 "num_base_bdevs": 2, 00:15:19.027 "num_base_bdevs_discovered": 1, 00:15:19.027 "num_base_bdevs_operational": 1, 00:15:19.027 "base_bdevs_list": [ 00:15:19.027 { 00:15:19.027 "name": null, 00:15:19.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.027 "is_configured": false, 00:15:19.027 "data_offset": 2048, 00:15:19.027 "data_size": 63488 00:15:19.027 }, 00:15:19.027 { 00:15:19.027 "name": "BaseBdev2", 00:15:19.027 "uuid": "416a66ae-8b8e-4a25-9d76-e66732a197a8", 00:15:19.027 "is_configured": true, 00:15:19.027 "data_offset": 2048, 00:15:19.027 "data_size": 63488 00:15:19.027 } 00:15:19.027 ] 00:15:19.027 }' 00:15:19.027 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.027 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.593 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:19.593 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:19.593 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.593 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:19.852 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:19.852 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.852 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:20.121 [2024-07-13 23:01:09.377784] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.121 [2024-07-13 23:01:09.377892] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:20.121 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:20.121 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:20.121 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.121 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 132581 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 132581 ']' 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 132581 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132581 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132581' 00:15:20.387 killing process with pid 132581 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 132581 00:15:20.387 [2024-07-13 23:01:09.667726] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.387 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 132581 00:15:20.387 [2024-07-13 23:01:09.667811] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.646 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:20.646 00:15:20.646 real 0m11.083s 00:15:20.646 user 0m20.405s 00:15:20.646 sys 0m1.387s 00:15:20.646 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.646 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.646 ************************************ 00:15:20.646 END TEST raid_state_function_test_sb 00:15:20.646 ************************************ 00:15:20.646 23:01:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:20.646 23:01:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:20.646 23:01:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:20.646 23:01:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.646 23:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.646 ************************************ 00:15:20.646 START TEST raid_superblock_test 00:15:20.646 ************************************ 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 
-- # local base_bdevs_pt 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=132949 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 132949 /var/tmp/spdk-raid.sock 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 132949 ']' 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.646 [2024-07-13 23:01:10.014108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:15:20.646 [2024-07-13 23:01:10.014331] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132949 ] 00:15:20.905 [2024-07-13 23:01:10.158565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.905 [2024-07-13 23:01:10.256305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.164 [2024-07-13 23:01:10.315540] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.732 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:21.991 malloc1 00:15:21.991 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:22.250 [2024-07-13 23:01:11.467117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:22.250 [2024-07-13 23:01:11.467259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.250 [2024-07-13 23:01:11.467304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:22.250 [2024-07-13 23:01:11.467358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.250 [2024-07-13 23:01:11.469936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.250 [2024-07-13 23:01:11.470012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:22.250 pt1 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:22.250 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:22.251 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:22.251 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:22.510 malloc2 00:15:22.510 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.510 [2024-07-13 23:01:11.910791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.510 [2024-07-13 23:01:11.910896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.510 [2024-07-13 23:01:11.910939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:22.510 [2024-07-13 23:01:11.910994] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.510 [2024-07-13 23:01:11.913384] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.510 [2024-07-13 23:01:11.913446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.510 pt2 00:15:22.768 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:22.768 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:22.768 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:22.768 [2024-07-13 23:01:12.130945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.768 [2024-07-13 23:01:12.133547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.768 [2024-07-13 23:01:12.133828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:22.768 [2024-07-13 23:01:12.133849] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:22.769 [2024-07-13 23:01:12.134098] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:22.769 [2024-07-13 23:01:12.134675] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:22.769 [2024-07-13 23:01:12.134703] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:22.769 [2024-07-13 23:01:12.134942] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.769 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.027 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.027 "name": "raid_bdev1", 00:15:23.027 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:23.027 "strip_size_kb": 64, 00:15:23.027 "state": "online", 00:15:23.027 "raid_level": "concat", 00:15:23.027 "superblock": true, 00:15:23.027 "num_base_bdevs": 2, 00:15:23.027 "num_base_bdevs_discovered": 2, 00:15:23.027 "num_base_bdevs_operational": 2, 00:15:23.027 "base_bdevs_list": [ 00:15:23.027 { 00:15:23.027 "name": "pt1", 00:15:23.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.027 "is_configured": true, 00:15:23.027 "data_offset": 2048, 00:15:23.027 "data_size": 63488 00:15:23.027 }, 00:15:23.027 { 00:15:23.027 "name": "pt2", 00:15:23.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.027 "is_configured": true, 00:15:23.027 "data_offset": 2048, 00:15:23.027 "data_size": 63488 00:15:23.027 } 00:15:23.027 ] 00:15:23.027 }' 00:15:23.027 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.027 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.594 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:23.594 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:23.594 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:23.594 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:23.594 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:23.594 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:23.854 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:23.854 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:23.854 [2024-07-13 23:01:13.203477] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.854 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:23.854 "name": "raid_bdev1", 00:15:23.854 "aliases": [ 00:15:23.854 "d7841c9f-33f3-419c-aac0-3f800046ecd5" 00:15:23.854 ], 00:15:23.854 "product_name": "Raid Volume", 00:15:23.854 "block_size": 512, 00:15:23.854 "num_blocks": 126976, 00:15:23.854 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:23.854 "assigned_rate_limits": { 00:15:23.854 "rw_ios_per_sec": 0, 00:15:23.854 "rw_mbytes_per_sec": 0, 00:15:23.854 "r_mbytes_per_sec": 0, 00:15:23.854 "w_mbytes_per_sec": 0 00:15:23.854 }, 
00:15:23.854 "claimed": false, 00:15:23.854 "zoned": false, 00:15:23.854 "supported_io_types": { 00:15:23.854 "read": true, 00:15:23.854 "write": true, 00:15:23.854 "unmap": true, 00:15:23.854 "flush": true, 00:15:23.854 "reset": true, 00:15:23.854 "nvme_admin": false, 00:15:23.854 "nvme_io": false, 00:15:23.854 "nvme_io_md": false, 00:15:23.854 "write_zeroes": true, 00:15:23.854 "zcopy": false, 00:15:23.854 "get_zone_info": false, 00:15:23.854 "zone_management": false, 00:15:23.854 "zone_append": false, 00:15:23.854 "compare": false, 00:15:23.854 "compare_and_write": false, 00:15:23.854 "abort": false, 00:15:23.854 "seek_hole": false, 00:15:23.854 "seek_data": false, 00:15:23.854 "copy": false, 00:15:23.854 "nvme_iov_md": false 00:15:23.854 }, 00:15:23.854 "memory_domains": [ 00:15:23.854 { 00:15:23.854 "dma_device_id": "system", 00:15:23.854 "dma_device_type": 1 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.854 "dma_device_type": 2 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "dma_device_id": "system", 00:15:23.854 "dma_device_type": 1 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.854 "dma_device_type": 2 00:15:23.854 } 00:15:23.854 ], 00:15:23.854 "driver_specific": { 00:15:23.854 "raid": { 00:15:23.854 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:23.854 "strip_size_kb": 64, 00:15:23.854 "state": "online", 00:15:23.854 "raid_level": "concat", 00:15:23.854 "superblock": true, 00:15:23.854 "num_base_bdevs": 2, 00:15:23.854 "num_base_bdevs_discovered": 2, 00:15:23.854 "num_base_bdevs_operational": 2, 00:15:23.854 "base_bdevs_list": [ 00:15:23.854 { 00:15:23.854 "name": "pt1", 00:15:23.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.854 "is_configured": true, 00:15:23.854 "data_offset": 2048, 00:15:23.854 "data_size": 63488 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "name": "pt2", 00:15:23.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.854 "is_configured": true, 00:15:23.854 "data_offset": 2048, 00:15:23.854 "data_size": 63488 00:15:23.854 } 00:15:23.854 ] 00:15:23.854 } 00:15:23.854 } 00:15:23.854 }' 00:15:23.854 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.854 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:23.854 pt2' 00:15:23.854 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:23.854 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:23.854 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.113 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.113 "name": "pt1", 00:15:24.113 "aliases": [ 00:15:24.113 "00000000-0000-0000-0000-000000000001" 00:15:24.113 ], 00:15:24.113 "product_name": "passthru", 00:15:24.113 "block_size": 512, 00:15:24.113 "num_blocks": 65536, 00:15:24.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.113 "assigned_rate_limits": { 00:15:24.113 "rw_ios_per_sec": 0, 00:15:24.113 "rw_mbytes_per_sec": 0, 00:15:24.113 "r_mbytes_per_sec": 0, 00:15:24.113 "w_mbytes_per_sec": 0 00:15:24.113 }, 00:15:24.113 "claimed": true, 00:15:24.113 "claim_type": "exclusive_write", 00:15:24.113 "zoned": false, 00:15:24.113 
"supported_io_types": { 00:15:24.113 "read": true, 00:15:24.113 "write": true, 00:15:24.113 "unmap": true, 00:15:24.113 "flush": true, 00:15:24.113 "reset": true, 00:15:24.113 "nvme_admin": false, 00:15:24.113 "nvme_io": false, 00:15:24.113 "nvme_io_md": false, 00:15:24.113 "write_zeroes": true, 00:15:24.113 "zcopy": true, 00:15:24.113 "get_zone_info": false, 00:15:24.113 "zone_management": false, 00:15:24.113 "zone_append": false, 00:15:24.113 "compare": false, 00:15:24.113 "compare_and_write": false, 00:15:24.113 "abort": true, 00:15:24.113 "seek_hole": false, 00:15:24.113 "seek_data": false, 00:15:24.113 "copy": true, 00:15:24.113 "nvme_iov_md": false 00:15:24.113 }, 00:15:24.113 "memory_domains": [ 00:15:24.113 { 00:15:24.113 "dma_device_id": "system", 00:15:24.113 "dma_device_type": 1 00:15:24.113 }, 00:15:24.113 { 00:15:24.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.113 "dma_device_type": 2 00:15:24.113 } 00:15:24.113 ], 00:15:24.113 "driver_specific": { 00:15:24.113 "passthru": { 00:15:24.113 "name": "pt1", 00:15:24.113 "base_bdev_name": "malloc1" 00:15:24.113 } 00:15:24.113 } 00:15:24.113 }' 00:15:24.113 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:24.372 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.630 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.630 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:24.630 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:24.630 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:24.630 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.889 "name": "pt2", 00:15:24.889 "aliases": [ 00:15:24.889 "00000000-0000-0000-0000-000000000002" 00:15:24.889 ], 00:15:24.889 "product_name": "passthru", 00:15:24.889 "block_size": 512, 00:15:24.889 "num_blocks": 65536, 00:15:24.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.889 "assigned_rate_limits": { 00:15:24.889 "rw_ios_per_sec": 0, 00:15:24.889 "rw_mbytes_per_sec": 0, 00:15:24.889 "r_mbytes_per_sec": 0, 00:15:24.889 "w_mbytes_per_sec": 0 00:15:24.889 }, 00:15:24.889 "claimed": true, 00:15:24.889 "claim_type": "exclusive_write", 00:15:24.889 "zoned": false, 00:15:24.889 "supported_io_types": { 00:15:24.889 "read": true, 00:15:24.889 "write": true, 00:15:24.889 "unmap": true, 00:15:24.889 "flush": true, 00:15:24.889 
"reset": true, 00:15:24.889 "nvme_admin": false, 00:15:24.889 "nvme_io": false, 00:15:24.889 "nvme_io_md": false, 00:15:24.889 "write_zeroes": true, 00:15:24.889 "zcopy": true, 00:15:24.889 "get_zone_info": false, 00:15:24.889 "zone_management": false, 00:15:24.889 "zone_append": false, 00:15:24.889 "compare": false, 00:15:24.889 "compare_and_write": false, 00:15:24.889 "abort": true, 00:15:24.889 "seek_hole": false, 00:15:24.889 "seek_data": false, 00:15:24.889 "copy": true, 00:15:24.889 "nvme_iov_md": false 00:15:24.889 }, 00:15:24.889 "memory_domains": [ 00:15:24.889 { 00:15:24.889 "dma_device_id": "system", 00:15:24.889 "dma_device_type": 1 00:15:24.889 }, 00:15:24.889 { 00:15:24.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.889 "dma_device_type": 2 00:15:24.889 } 00:15:24.889 ], 00:15:24.889 "driver_specific": { 00:15:24.889 "passthru": { 00:15:24.889 "name": "pt2", 00:15:24.889 "base_bdev_name": "malloc2" 00:15:24.889 } 00:15:24.889 } 00:15:24.889 }' 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.889 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.148 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:25.148 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.148 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.148 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:25.148 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:25.148 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:25.407 [2024-07-13 23:01:14.635533] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.407 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d7841c9f-33f3-419c-aac0-3f800046ecd5 00:15:25.407 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d7841c9f-33f3-419c-aac0-3f800046ecd5 ']' 00:15:25.407 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:25.665 [2024-07-13 23:01:14.899337] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.665 [2024-07-13 23:01:14.899365] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.666 [2024-07-13 23:01:14.899475] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.666 [2024-07-13 23:01:14.899552] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:25.666 [2024-07-13 23:01:14.899568] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:25.666 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.666 23:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:25.924 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:25.924 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:25.924 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:25.924 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:26.183 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.183 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:26.442 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:26.442 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:26.701 23:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:26.960 [2024-07-13 23:01:16.187598] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:26.960 [2024-07-13 23:01:16.189831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:26.960 [2024-07-13 23:01:16.189927] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:26.960 [2024-07-13 23:01:16.190031] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:26.960 [2024-07-13 23:01:16.190081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.960 [2024-07-13 23:01:16.190094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:26.960 request: 00:15:26.960 { 00:15:26.960 "name": "raid_bdev1", 00:15:26.960 "raid_level": "concat", 00:15:26.960 "base_bdevs": [ 00:15:26.960 "malloc1", 00:15:26.960 "malloc2" 00:15:26.960 ], 00:15:26.960 "strip_size_kb": 64, 00:15:26.960 "superblock": false, 00:15:26.960 "method": "bdev_raid_create", 00:15:26.960 "req_id": 1 00:15:26.960 } 00:15:26.960 Got JSON-RPC error response 00:15:26.960 response: 00:15:26.960 { 00:15:26.960 "code": -17, 00:15:26.960 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:26.960 } 00:15:26.960 23:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:26.960 23:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.960 23:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.960 23:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.960 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.960 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:27.220 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:27.220 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:27.220 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:27.479 [2024-07-13 23:01:16.691598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.479 [2024-07-13 23:01:16.691750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.479 [2024-07-13 23:01:16.691800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:27.479 [2024-07-13 23:01:16.691835] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.479 [2024-07-13 23:01:16.694268] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.479 [2024-07-13 23:01:16.694333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.479 [2024-07-13 23:01:16.694415] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:27.479 [2024-07-13 23:01:16.694469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.479 pt1 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.479 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.480 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.480 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.480 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.739 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.739 "name": "raid_bdev1", 00:15:27.739 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:27.739 "strip_size_kb": 64, 00:15:27.739 "state": "configuring", 00:15:27.739 "raid_level": "concat", 00:15:27.739 "superblock": true, 00:15:27.739 "num_base_bdevs": 2, 00:15:27.739 "num_base_bdevs_discovered": 1, 00:15:27.739 "num_base_bdevs_operational": 2, 00:15:27.739 "base_bdevs_list": [ 00:15:27.739 { 00:15:27.739 "name": "pt1", 00:15:27.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.739 "is_configured": true, 00:15:27.739 "data_offset": 2048, 00:15:27.739 "data_size": 63488 00:15:27.739 }, 00:15:27.739 { 00:15:27.739 "name": null, 00:15:27.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.739 "is_configured": false, 00:15:27.739 "data_offset": 2048, 00:15:27.739 "data_size": 63488 00:15:27.739 } 00:15:27.739 ] 00:15:27.739 }' 00:15:27.739 23:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.739 23:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.306 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:28.306 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:28.306 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:28.306 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.565 [2024-07-13 23:01:17.904001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.565 [2024-07-13 23:01:17.904179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.565 [2024-07-13 23:01:17.904228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:28.565 [2024-07-13 23:01:17.904262] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.565 [2024-07-13 
23:01:17.904856] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.565 [2024-07-13 23:01:17.904938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.565 [2024-07-13 23:01:17.905043] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:28.565 [2024-07-13 23:01:17.905080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.565 [2024-07-13 23:01:17.905249] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:28.565 [2024-07-13 23:01:17.905267] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.565 [2024-07-13 23:01:17.905361] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:28.565 [2024-07-13 23:01:17.905695] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:28.565 [2024-07-13 23:01:17.905720] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:28.565 [2024-07-13 23:01:17.905835] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.565 pt2 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.565 23:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.824 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.824 "name": "raid_bdev1", 00:15:28.824 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:28.824 "strip_size_kb": 64, 00:15:28.824 "state": "online", 00:15:28.824 "raid_level": "concat", 00:15:28.824 "superblock": true, 00:15:28.824 "num_base_bdevs": 2, 00:15:28.824 "num_base_bdevs_discovered": 2, 00:15:28.824 "num_base_bdevs_operational": 2, 00:15:28.824 "base_bdevs_list": [ 00:15:28.824 { 00:15:28.824 "name": "pt1", 00:15:28.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.824 "is_configured": true, 00:15:28.824 "data_offset": 2048, 00:15:28.824 
"data_size": 63488 00:15:28.824 }, 00:15:28.824 { 00:15:28.824 "name": "pt2", 00:15:28.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.824 "is_configured": true, 00:15:28.824 "data_offset": 2048, 00:15:28.824 "data_size": 63488 00:15:28.824 } 00:15:28.824 ] 00:15:28.824 }' 00:15:28.824 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.824 23:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.391 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:29.391 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:29.391 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:29.391 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:29.391 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:29.391 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:29.650 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:29.650 23:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:29.650 [2024-07-13 23:01:19.040269] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.650 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:29.650 "name": "raid_bdev1", 00:15:29.650 "aliases": [ 00:15:29.650 "d7841c9f-33f3-419c-aac0-3f800046ecd5" 00:15:29.650 ], 00:15:29.650 "product_name": "Raid Volume", 00:15:29.650 "block_size": 512, 00:15:29.650 "num_blocks": 126976, 00:15:29.650 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:29.650 "assigned_rate_limits": { 00:15:29.650 "rw_ios_per_sec": 0, 00:15:29.650 "rw_mbytes_per_sec": 0, 00:15:29.650 "r_mbytes_per_sec": 0, 00:15:29.650 "w_mbytes_per_sec": 0 00:15:29.650 }, 00:15:29.650 "claimed": false, 00:15:29.650 "zoned": false, 00:15:29.650 "supported_io_types": { 00:15:29.650 "read": true, 00:15:29.650 "write": true, 00:15:29.650 "unmap": true, 00:15:29.650 "flush": true, 00:15:29.650 "reset": true, 00:15:29.650 "nvme_admin": false, 00:15:29.650 "nvme_io": false, 00:15:29.650 "nvme_io_md": false, 00:15:29.650 "write_zeroes": true, 00:15:29.650 "zcopy": false, 00:15:29.650 "get_zone_info": false, 00:15:29.650 "zone_management": false, 00:15:29.650 "zone_append": false, 00:15:29.650 "compare": false, 00:15:29.650 "compare_and_write": false, 00:15:29.650 "abort": false, 00:15:29.650 "seek_hole": false, 00:15:29.650 "seek_data": false, 00:15:29.650 "copy": false, 00:15:29.650 "nvme_iov_md": false 00:15:29.650 }, 00:15:29.650 "memory_domains": [ 00:15:29.650 { 00:15:29.650 "dma_device_id": "system", 00:15:29.650 "dma_device_type": 1 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.650 "dma_device_type": 2 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "dma_device_id": "system", 00:15:29.650 "dma_device_type": 1 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.650 "dma_device_type": 2 00:15:29.650 } 00:15:29.650 ], 00:15:29.650 "driver_specific": { 00:15:29.650 "raid": { 00:15:29.650 "uuid": "d7841c9f-33f3-419c-aac0-3f800046ecd5", 00:15:29.650 "strip_size_kb": 64, 00:15:29.650 "state": 
"online", 00:15:29.650 "raid_level": "concat", 00:15:29.650 "superblock": true, 00:15:29.650 "num_base_bdevs": 2, 00:15:29.650 "num_base_bdevs_discovered": 2, 00:15:29.650 "num_base_bdevs_operational": 2, 00:15:29.650 "base_bdevs_list": [ 00:15:29.650 { 00:15:29.650 "name": "pt1", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.650 "is_configured": true, 00:15:29.650 "data_offset": 2048, 00:15:29.650 "data_size": 63488 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "name": "pt2", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.650 "is_configured": true, 00:15:29.650 "data_offset": 2048, 00:15:29.650 "data_size": 63488 00:15:29.650 } 00:15:29.650 ] 00:15:29.650 } 00:15:29.650 } 00:15:29.650 }' 00:15:29.909 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.909 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:29.909 pt2' 00:15:29.909 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:29.909 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:29.909 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.168 "name": "pt1", 00:15:30.168 "aliases": [ 00:15:30.168 "00000000-0000-0000-0000-000000000001" 00:15:30.168 ], 00:15:30.168 "product_name": "passthru", 00:15:30.168 "block_size": 512, 00:15:30.168 "num_blocks": 65536, 00:15:30.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:30.168 "assigned_rate_limits": { 00:15:30.168 "rw_ios_per_sec": 0, 00:15:30.168 "rw_mbytes_per_sec": 0, 00:15:30.168 "r_mbytes_per_sec": 0, 00:15:30.168 "w_mbytes_per_sec": 0 00:15:30.168 }, 00:15:30.168 "claimed": true, 00:15:30.168 "claim_type": "exclusive_write", 00:15:30.168 "zoned": false, 00:15:30.168 "supported_io_types": { 00:15:30.168 "read": true, 00:15:30.168 "write": true, 00:15:30.168 "unmap": true, 00:15:30.168 "flush": true, 00:15:30.168 "reset": true, 00:15:30.168 "nvme_admin": false, 00:15:30.168 "nvme_io": false, 00:15:30.168 "nvme_io_md": false, 00:15:30.168 "write_zeroes": true, 00:15:30.168 "zcopy": true, 00:15:30.168 "get_zone_info": false, 00:15:30.168 "zone_management": false, 00:15:30.168 "zone_append": false, 00:15:30.168 "compare": false, 00:15:30.168 "compare_and_write": false, 00:15:30.168 "abort": true, 00:15:30.168 "seek_hole": false, 00:15:30.168 "seek_data": false, 00:15:30.168 "copy": true, 00:15:30.168 "nvme_iov_md": false 00:15:30.168 }, 00:15:30.168 "memory_domains": [ 00:15:30.168 { 00:15:30.168 "dma_device_id": "system", 00:15:30.168 "dma_device_type": 1 00:15:30.168 }, 00:15:30.168 { 00:15:30.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.168 "dma_device_type": 2 00:15:30.168 } 00:15:30.168 ], 00:15:30.168 "driver_specific": { 00:15:30.168 "passthru": { 00:15:30.168 "name": "pt1", 00:15:30.168 "base_bdev_name": "malloc1" 00:15:30.168 } 00:15:30.168 } 00:15:30.168 }' 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.168 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:30.427 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.686 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.686 "name": "pt2", 00:15:30.686 "aliases": [ 00:15:30.686 "00000000-0000-0000-0000-000000000002" 00:15:30.686 ], 00:15:30.686 "product_name": "passthru", 00:15:30.686 "block_size": 512, 00:15:30.686 "num_blocks": 65536, 00:15:30.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.686 "assigned_rate_limits": { 00:15:30.686 "rw_ios_per_sec": 0, 00:15:30.686 "rw_mbytes_per_sec": 0, 00:15:30.686 "r_mbytes_per_sec": 0, 00:15:30.686 "w_mbytes_per_sec": 0 00:15:30.686 }, 00:15:30.686 "claimed": true, 00:15:30.686 "claim_type": "exclusive_write", 00:15:30.686 "zoned": false, 00:15:30.686 "supported_io_types": { 00:15:30.686 "read": true, 00:15:30.686 "write": true, 00:15:30.686 "unmap": true, 00:15:30.686 "flush": true, 00:15:30.686 "reset": true, 00:15:30.686 "nvme_admin": false, 00:15:30.686 "nvme_io": false, 00:15:30.686 "nvme_io_md": false, 00:15:30.686 "write_zeroes": true, 00:15:30.686 "zcopy": true, 00:15:30.686 "get_zone_info": false, 00:15:30.686 "zone_management": false, 00:15:30.686 "zone_append": false, 00:15:30.686 "compare": false, 00:15:30.686 "compare_and_write": false, 00:15:30.686 "abort": true, 00:15:30.686 "seek_hole": false, 00:15:30.686 "seek_data": false, 00:15:30.686 "copy": true, 00:15:30.686 "nvme_iov_md": false 00:15:30.686 }, 00:15:30.686 "memory_domains": [ 00:15:30.686 { 00:15:30.686 "dma_device_id": "system", 00:15:30.686 "dma_device_type": 1 00:15:30.686 }, 00:15:30.686 { 00:15:30.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.686 "dma_device_type": 2 00:15:30.686 } 00:15:30.686 ], 00:15:30.686 "driver_specific": { 00:15:30.686 "passthru": { 00:15:30.686 "name": "pt2", 00:15:30.686 "base_bdev_name": "malloc2" 00:15:30.686 } 00:15:30.686 } 00:15:30.686 }' 00:15:30.686 23:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.686 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.686 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.686 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.944 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:31.203 [2024-07-13 23:01:20.554059] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d7841c9f-33f3-419c-aac0-3f800046ecd5 '!=' d7841c9f-33f3-419c-aac0-3f800046ecd5 ']' 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 132949 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 132949 ']' 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 132949 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132949 00:15:31.203 killing process with pid 132949 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132949' 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 132949 00:15:31.203 [2024-07-13 23:01:20.596885] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.203 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 132949 00:15:31.203 [2024-07-13 23:01:20.597018] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.203 [2024-07-13 23:01:20.597098] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.203 [2024-07-13 23:01:20.597112] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:31.507 [2024-07-13 23:01:20.617781] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.507 23:01:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:15:31.507 00:15:31.507 real 0m10.877s 00:15:31.507 user 0m20.163s 00:15:31.507 sys 0m1.366s 00:15:31.507 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.507 23:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.507 ************************************ 00:15:31.507 END TEST raid_superblock_test 00:15:31.507 ************************************ 00:15:31.507 23:01:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:31.507 23:01:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:31.507 23:01:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:31.507 23:01:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.507 23:01:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.765 ************************************ 00:15:31.765 START TEST raid_read_error_test 00:15:31.765 ************************************ 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 
-- # bdevperf_log=/raidtest/tmp.Un3fFoIgYi 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=133319 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 133319 /var/tmp/spdk-raid.sock 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 133319 ']' 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:31.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.765 23:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.765 [2024-07-13 23:01:20.969747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:31.765 [2024-07-13 23:01:20.969994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133319 ] 00:15:31.765 [2024-07-13 23:01:21.120830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.024 [2024-07-13 23:01:21.211252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.024 [2024-07-13 23:01:21.289701] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.591 23:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.591 23:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:32.591 23:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:32.591 23:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.850 BaseBdev1_malloc 00:15:32.850 23:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:33.108 true 00:15:33.108 23:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:33.366 [2024-07-13 23:01:22.570958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:33.366 [2024-07-13 23:01:22.571227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.366 [2024-07-13 23:01:22.571400] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:15:33.366 [2024-07-13 23:01:22.571557] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:33.366 [2024-07-13 23:01:22.574327] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.366 [2024-07-13 23:01:22.574509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.366 BaseBdev1 00:15:33.366 23:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:33.366 23:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.624 BaseBdev2_malloc 00:15:33.624 23:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:33.624 true 00:15:33.624 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:33.883 [2024-07-13 23:01:23.193018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:33.883 [2024-07-13 23:01:23.194347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.883 [2024-07-13 23:01:23.194434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:33.883 [2024-07-13 23:01:23.194673] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.883 [2024-07-13 23:01:23.197381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.883 [2024-07-13 23:01:23.197545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.883 BaseBdev2 00:15:33.884 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:34.142 [2024-07-13 23:01:23.445966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.142 [2024-07-13 23:01:23.448232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.142 [2024-07-13 23:01:23.448607] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:34.142 [2024-07-13 23:01:23.448731] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.142 [2024-07-13 23:01:23.449008] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:34.142 [2024-07-13 23:01:23.449547] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:34.142 [2024-07-13 23:01:23.449694] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:15:34.142 [2024-07-13 23:01:23.450020] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:34.142 
23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.142 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.399 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.399 "name": "raid_bdev1", 00:15:34.399 "uuid": "c9e966ae-107c-4130-a6fd-9c910ed2efc3", 00:15:34.399 "strip_size_kb": 64, 00:15:34.399 "state": "online", 00:15:34.399 "raid_level": "concat", 00:15:34.399 "superblock": true, 00:15:34.399 "num_base_bdevs": 2, 00:15:34.399 "num_base_bdevs_discovered": 2, 00:15:34.399 "num_base_bdevs_operational": 2, 00:15:34.399 "base_bdevs_list": [ 00:15:34.399 { 00:15:34.399 "name": "BaseBdev1", 00:15:34.399 "uuid": "ec84a153-88b3-55ba-b877-334db88938de", 00:15:34.399 "is_configured": true, 00:15:34.399 "data_offset": 2048, 00:15:34.399 "data_size": 63488 00:15:34.399 }, 00:15:34.399 { 00:15:34.399 "name": "BaseBdev2", 00:15:34.399 "uuid": "d3bbfefd-8ac4-5308-be7b-95b28eaad8fa", 00:15:34.399 "is_configured": true, 00:15:34.399 "data_offset": 2048, 00:15:34.399 "data_size": 63488 00:15:34.399 } 00:15:34.399 ] 00:15:34.399 }' 00:15:34.399 23:01:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.399 23:01:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.963 23:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:34.963 23:01:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:35.222 [2024-07-13 23:01:24.422840] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:36.153 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:36.411 23:01:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.411 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.668 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.668 "name": "raid_bdev1", 00:15:36.668 "uuid": "c9e966ae-107c-4130-a6fd-9c910ed2efc3", 00:15:36.668 "strip_size_kb": 64, 00:15:36.668 "state": "online", 00:15:36.668 "raid_level": "concat", 00:15:36.668 "superblock": true, 00:15:36.668 "num_base_bdevs": 2, 00:15:36.668 "num_base_bdevs_discovered": 2, 00:15:36.668 "num_base_bdevs_operational": 2, 00:15:36.668 "base_bdevs_list": [ 00:15:36.668 { 00:15:36.668 "name": "BaseBdev1", 00:15:36.668 "uuid": "ec84a153-88b3-55ba-b877-334db88938de", 00:15:36.668 "is_configured": true, 00:15:36.668 "data_offset": 2048, 00:15:36.668 "data_size": 63488 00:15:36.668 }, 00:15:36.668 { 00:15:36.668 "name": "BaseBdev2", 00:15:36.668 "uuid": "d3bbfefd-8ac4-5308-be7b-95b28eaad8fa", 00:15:36.668 "is_configured": true, 00:15:36.668 "data_offset": 2048, 00:15:36.668 "data_size": 63488 00:15:36.668 } 00:15:36.668 ] 00:15:36.668 }' 00:15:36.668 23:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.668 23:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.233 23:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:37.491 [2024-07-13 23:01:26.694272] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.491 [2024-07-13 23:01:26.694330] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.491 [2024-07-13 23:01:26.696804] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.491 [2024-07-13 23:01:26.696870] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.491 [2024-07-13 23:01:26.696930] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.491 [2024-07-13 23:01:26.696942] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:15:37.491 0 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 133319 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 133319 ']' 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 133319 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133319 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:37.491 killing process with pid 133319 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133319' 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 133319 00:15:37.491 23:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 133319 00:15:37.491 [2024-07-13 23:01:26.726409] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.491 [2024-07-13 23:01:26.741399] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.748 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Un3fFoIgYi 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:15:37.749 00:15:37.749 real 0m6.164s 00:15:37.749 user 0m9.803s 00:15:37.749 sys 0m0.795s 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.749 23:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 ************************************ 00:15:37.749 END TEST raid_read_error_test 00:15:37.749 ************************************ 00:15:37.749 23:01:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:37.749 23:01:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:37.749 23:01:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:37.749 23:01:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.749 23:01:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 ************************************ 00:15:37.749 START TEST raid_write_error_test 00:15:37.749 ************************************ 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:37.749 23:01:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.1hJNkye9HT 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=133498 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 133498 /var/tmp/spdk-raid.sock 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 133498 ']' 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.749 23:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.007 [2024-07-13 23:01:27.184946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
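The raid_write_error_test setup now being traced mirrors the read test above: each base device is a layered stack of malloc bdev, error bdev (which takes the EE_ prefix), and passthru bdev, assembled into a concat raid; errors are then injected at the bottom layer while bdevperf exercises raid_bdev1. A condensed shell sketch reconstructed from the RPC calls that appear verbatim in this log (the loop is a condensation of the test's per-bdev setup; sizes match the blockcnt 65536 x blocklen 512 seen in the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for n in 1 2; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${n}_malloc        # 65536 blocks of 512 B
        $rpc bdev_error_create BaseBdev${n}_malloc                   # exposes EE_BaseBdev${n}_malloc
        $rpc bdev_passthru_create -b EE_BaseBdev${n}_malloc -p BaseBdev${n}
    done
    # concat raid with 64k strip size and on-disk superblock (-s)
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # while bdevperf runs against raid_bdev1, fail writes at the bottom of the stack
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure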
00:15:38.007 [2024-07-13 23:01:27.185156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133498 ] 00:15:38.007 [2024-07-13 23:01:27.322526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.007 [2024-07-13 23:01:27.406405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.265 [2024-07-13 23:01:27.481692] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.831 23:01:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.832 23:01:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:38.832 23:01:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:38.832 23:01:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.090 BaseBdev1_malloc 00:15:39.090 23:01:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:39.348 true 00:15:39.348 23:01:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:39.605 [2024-07-13 23:01:28.837672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:39.605 [2024-07-13 23:01:28.837772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.605 [2024-07-13 23:01:28.837819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:15:39.605 [2024-07-13 23:01:28.837871] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.605 [2024-07-13 23:01:28.840372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.605 [2024-07-13 23:01:28.840426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.605 BaseBdev1 00:15:39.605 23:01:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:39.605 23:01:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.863 BaseBdev2_malloc 00:15:39.863 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:40.120 true 00:15:40.121 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:40.378 [2024-07-13 23:01:29.535245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:40.378 [2024-07-13 23:01:29.535340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.378 [2024-07-13 23:01:29.535388] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:40.378 [2024-07-13 
23:01:29.535441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.378 [2024-07-13 23:01:29.537898] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.378 [2024-07-13 23:01:29.537947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.378 BaseBdev2 00:15:40.378 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:40.379 [2024-07-13 23:01:29.743346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.379 [2024-07-13 23:01:29.745468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.379 [2024-07-13 23:01:29.745708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:40.379 [2024-07-13 23:01:29.745724] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:40.379 [2024-07-13 23:01:29.745839] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:40.379 [2024-07-13 23:01:29.746277] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:40.379 [2024-07-13 23:01:29.746297] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:15:40.379 [2024-07-13 23:01:29.746426] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.379 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.637 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.637 "name": "raid_bdev1", 00:15:40.637 "uuid": "dbec715f-6027-4ea3-80cc-8f9bb7ce2b00", 00:15:40.637 "strip_size_kb": 64, 00:15:40.637 "state": "online", 00:15:40.637 "raid_level": "concat", 00:15:40.637 "superblock": true, 00:15:40.637 "num_base_bdevs": 2, 00:15:40.637 "num_base_bdevs_discovered": 2, 00:15:40.637 "num_base_bdevs_operational": 2, 00:15:40.637 "base_bdevs_list": [ 00:15:40.637 { 
00:15:40.637 "name": "BaseBdev1", 00:15:40.637 "uuid": "93d5eadf-02c9-565f-a595-bd7a8ee85655", 00:15:40.637 "is_configured": true, 00:15:40.637 "data_offset": 2048, 00:15:40.637 "data_size": 63488 00:15:40.637 }, 00:15:40.637 { 00:15:40.637 "name": "BaseBdev2", 00:15:40.637 "uuid": "2f8e0e65-39f8-543a-b8f6-6432371743df", 00:15:40.637 "is_configured": true, 00:15:40.637 "data_offset": 2048, 00:15:40.637 "data_size": 63488 00:15:40.637 } 00:15:40.637 ] 00:15:40.637 }' 00:15:40.637 23:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.637 23:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.204 23:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:41.204 23:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:41.463 [2024-07-13 23:01:30.668809] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:42.399 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.658 23:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.917 23:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.917 "name": "raid_bdev1", 00:15:42.917 "uuid": "dbec715f-6027-4ea3-80cc-8f9bb7ce2b00", 00:15:42.917 "strip_size_kb": 64, 00:15:42.917 "state": "online", 00:15:42.917 "raid_level": "concat", 00:15:42.917 "superblock": true, 00:15:42.917 "num_base_bdevs": 2, 00:15:42.917 "num_base_bdevs_discovered": 2, 00:15:42.917 "num_base_bdevs_operational": 2, 00:15:42.917 "base_bdevs_list": [ 00:15:42.917 { 
00:15:42.917 "name": "BaseBdev1", 00:15:42.917 "uuid": "93d5eadf-02c9-565f-a595-bd7a8ee85655", 00:15:42.917 "is_configured": true, 00:15:42.917 "data_offset": 2048, 00:15:42.917 "data_size": 63488 00:15:42.917 }, 00:15:42.917 { 00:15:42.917 "name": "BaseBdev2", 00:15:42.917 "uuid": "2f8e0e65-39f8-543a-b8f6-6432371743df", 00:15:42.917 "is_configured": true, 00:15:42.917 "data_offset": 2048, 00:15:42.917 "data_size": 63488 00:15:42.917 } 00:15:42.917 ] 00:15:42.917 }' 00:15:42.917 23:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.917 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.484 23:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:43.742 [2024-07-13 23:01:32.910578] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.742 [2024-07-13 23:01:32.910625] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.742 [2024-07-13 23:01:32.913648] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.742 [2024-07-13 23:01:32.913726] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.742 [2024-07-13 23:01:32.913784] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.742 [2024-07-13 23:01:32.913797] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:15:43.742 0 00:15:43.742 23:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 133498 00:15:43.742 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 133498 ']' 00:15:43.742 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 133498 00:15:43.742 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133498 00:15:43.743 killing process with pid 133498 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133498' 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 133498 00:15:43.743 23:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 133498 00:15:43.743 [2024-07-13 23:01:32.952510] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.743 [2024-07-13 23:01:32.967334] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.1hJNkye9HT 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 
00:15:44.064 ************************************ 00:15:44.064 END TEST raid_write_error_test 00:15:44.064 ************************************ 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:15:44.064 00:15:44.064 real 0m6.089s 00:15:44.064 user 0m9.631s 00:15:44.064 sys 0m0.837s 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.064 23:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.064 23:01:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:44.064 23:01:33 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:44.064 23:01:33 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:44.064 23:01:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:44.064 23:01:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.064 23:01:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.064 ************************************ 00:15:44.064 START TEST raid_state_function_test 00:15:44.064 ************************************ 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:44.064 23:01:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=133674 00:15:44.064 Process raid pid: 133674 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133674' 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 133674 /var/tmp/spdk-raid.sock 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 133674 ']' 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.064 23:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.064 [2024-07-13 23:01:33.329891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
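raid_state_function_test takes a different approach from the bdevperf-based tests: it starts a bare bdev_svc app and declares the raid before its base bdevs exist, which is why the trace below logs "Currently unable to find bdev" notices and leaves Existed_Raid in the "configuring" state with zero discovered base bdevs. A condensed sketch of that sequence, using only names and RPC calls taken from the trace (raid1 needs no strip size and superblock is false here, hence no -z or -s):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # declare the raid first; neither base bdev exists yet -> state "configuring"
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
    # creating a matching bdev later is enough for the waiting raid to claim it,
    # as the "bdev BaseBdev1 is claimed" debug line further down shows
    $rpc bdev_malloc_create 32 512 -b BaseBdev1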
00:15:44.064 [2024-07-13 23:01:33.330127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.335 [2024-07-13 23:01:33.475778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.335 [2024-07-13 23:01:33.564525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.335 [2024-07-13 23:01:33.619353] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:45.268 [2024-07-13 23:01:34.505086] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.268 [2024-07-13 23:01:34.505158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.268 [2024-07-13 23:01:34.505188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.268 [2024-07-13 23:01:34.505208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.268 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.526 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.526 "name": "Existed_Raid", 00:15:45.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.526 "strip_size_kb": 0, 00:15:45.527 "state": "configuring", 00:15:45.527 "raid_level": "raid1", 00:15:45.527 "superblock": false, 00:15:45.527 "num_base_bdevs": 2, 00:15:45.527 "num_base_bdevs_discovered": 0, 00:15:45.527 "num_base_bdevs_operational": 2, 00:15:45.527 "base_bdevs_list": [ 
00:15:45.527 { 00:15:45.527 "name": "BaseBdev1", 00:15:45.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.527 "is_configured": false, 00:15:45.527 "data_offset": 0, 00:15:45.527 "data_size": 0 00:15:45.527 }, 00:15:45.527 { 00:15:45.527 "name": "BaseBdev2", 00:15:45.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.527 "is_configured": false, 00:15:45.527 "data_offset": 0, 00:15:45.527 "data_size": 0 00:15:45.527 } 00:15:45.527 ] 00:15:45.527 }' 00:15:45.527 23:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.527 23:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.094 23:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.352 [2024-07-13 23:01:35.541383] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.352 [2024-07-13 23:01:35.541457] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:46.352 23:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:46.610 [2024-07-13 23:01:35.789359] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.610 [2024-07-13 23:01:35.789463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.610 [2024-07-13 23:01:35.789493] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.610 [2024-07-13 23:01:35.789523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.610 23:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.610 [2024-07-13 23:01:36.012767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.610 BaseBdev1 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.868 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.126 [ 00:15:47.126 { 00:15:47.126 "name": "BaseBdev1", 00:15:47.126 "aliases": [ 00:15:47.126 "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5" 00:15:47.126 ], 00:15:47.126 "product_name": "Malloc disk", 00:15:47.126 "block_size": 512, 00:15:47.126 "num_blocks": 
65536, 00:15:47.126 "uuid": "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5", 00:15:47.126 "assigned_rate_limits": { 00:15:47.126 "rw_ios_per_sec": 0, 00:15:47.126 "rw_mbytes_per_sec": 0, 00:15:47.126 "r_mbytes_per_sec": 0, 00:15:47.126 "w_mbytes_per_sec": 0 00:15:47.126 }, 00:15:47.126 "claimed": true, 00:15:47.126 "claim_type": "exclusive_write", 00:15:47.126 "zoned": false, 00:15:47.126 "supported_io_types": { 00:15:47.126 "read": true, 00:15:47.126 "write": true, 00:15:47.126 "unmap": true, 00:15:47.126 "flush": true, 00:15:47.126 "reset": true, 00:15:47.126 "nvme_admin": false, 00:15:47.126 "nvme_io": false, 00:15:47.126 "nvme_io_md": false, 00:15:47.126 "write_zeroes": true, 00:15:47.126 "zcopy": true, 00:15:47.126 "get_zone_info": false, 00:15:47.126 "zone_management": false, 00:15:47.126 "zone_append": false, 00:15:47.126 "compare": false, 00:15:47.126 "compare_and_write": false, 00:15:47.126 "abort": true, 00:15:47.126 "seek_hole": false, 00:15:47.126 "seek_data": false, 00:15:47.126 "copy": true, 00:15:47.126 "nvme_iov_md": false 00:15:47.126 }, 00:15:47.126 "memory_domains": [ 00:15:47.126 { 00:15:47.126 "dma_device_id": "system", 00:15:47.126 "dma_device_type": 1 00:15:47.126 }, 00:15:47.126 { 00:15:47.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.126 "dma_device_type": 2 00:15:47.126 } 00:15:47.126 ], 00:15:47.126 "driver_specific": {} 00:15:47.126 } 00:15:47.126 ] 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.126 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.385 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.385 "name": "Existed_Raid", 00:15:47.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.385 "strip_size_kb": 0, 00:15:47.385 "state": "configuring", 00:15:47.385 "raid_level": "raid1", 00:15:47.385 "superblock": false, 00:15:47.385 "num_base_bdevs": 2, 00:15:47.385 "num_base_bdevs_discovered": 1, 00:15:47.385 "num_base_bdevs_operational": 2, 00:15:47.385 "base_bdevs_list": [ 00:15:47.385 { 00:15:47.385 "name": "BaseBdev1", 00:15:47.385 "uuid": 
"6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5", 00:15:47.385 "is_configured": true, 00:15:47.385 "data_offset": 0, 00:15:47.385 "data_size": 65536 00:15:47.385 }, 00:15:47.385 { 00:15:47.385 "name": "BaseBdev2", 00:15:47.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.385 "is_configured": false, 00:15:47.385 "data_offset": 0, 00:15:47.385 "data_size": 0 00:15:47.385 } 00:15:47.385 ] 00:15:47.385 }' 00:15:47.385 23:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.385 23:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.319 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.319 [2024-07-13 23:01:37.701227] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.319 [2024-07-13 23:01:37.701314] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:48.319 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:48.578 [2024-07-13 23:01:37.961295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.578 [2024-07-13 23:01:37.963450] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.578 [2024-07-13 23:01:37.963528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.578 23:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.145 23:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.145 "name": "Existed_Raid", 00:15:49.145 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:49.145 "strip_size_kb": 0, 00:15:49.145 "state": "configuring", 00:15:49.145 "raid_level": "raid1", 00:15:49.145 "superblock": false, 00:15:49.145 "num_base_bdevs": 2, 00:15:49.145 "num_base_bdevs_discovered": 1, 00:15:49.145 "num_base_bdevs_operational": 2, 00:15:49.145 "base_bdevs_list": [ 00:15:49.145 { 00:15:49.145 "name": "BaseBdev1", 00:15:49.145 "uuid": "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5", 00:15:49.145 "is_configured": true, 00:15:49.145 "data_offset": 0, 00:15:49.145 "data_size": 65536 00:15:49.145 }, 00:15:49.145 { 00:15:49.145 "name": "BaseBdev2", 00:15:49.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.145 "is_configured": false, 00:15:49.145 "data_offset": 0, 00:15:49.145 "data_size": 0 00:15:49.145 } 00:15:49.145 ] 00:15:49.145 }' 00:15:49.145 23:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.145 23:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.713 23:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.713 [2024-07-13 23:01:39.100705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.713 [2024-07-13 23:01:39.100797] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:49.713 [2024-07-13 23:01:39.100809] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:49.713 [2024-07-13 23:01:39.100978] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:49.713 [2024-07-13 23:01:39.101478] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:49.713 [2024-07-13 23:01:39.101505] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:49.713 [2024-07-13 23:01:39.101759] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.713 BaseBdev2 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:49.972 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.231 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.490 [ 00:15:50.490 { 00:15:50.490 "name": "BaseBdev2", 00:15:50.490 "aliases": [ 00:15:50.490 "78d9f943-8b27-42e4-b22c-7d3bdde15e23" 00:15:50.490 ], 00:15:50.490 "product_name": "Malloc disk", 00:15:50.490 "block_size": 512, 00:15:50.490 "num_blocks": 65536, 00:15:50.490 "uuid": "78d9f943-8b27-42e4-b22c-7d3bdde15e23", 00:15:50.490 
"assigned_rate_limits": { 00:15:50.490 "rw_ios_per_sec": 0, 00:15:50.490 "rw_mbytes_per_sec": 0, 00:15:50.490 "r_mbytes_per_sec": 0, 00:15:50.490 "w_mbytes_per_sec": 0 00:15:50.490 }, 00:15:50.490 "claimed": true, 00:15:50.490 "claim_type": "exclusive_write", 00:15:50.490 "zoned": false, 00:15:50.490 "supported_io_types": { 00:15:50.490 "read": true, 00:15:50.490 "write": true, 00:15:50.490 "unmap": true, 00:15:50.490 "flush": true, 00:15:50.490 "reset": true, 00:15:50.490 "nvme_admin": false, 00:15:50.490 "nvme_io": false, 00:15:50.490 "nvme_io_md": false, 00:15:50.490 "write_zeroes": true, 00:15:50.490 "zcopy": true, 00:15:50.490 "get_zone_info": false, 00:15:50.490 "zone_management": false, 00:15:50.490 "zone_append": false, 00:15:50.490 "compare": false, 00:15:50.490 "compare_and_write": false, 00:15:50.490 "abort": true, 00:15:50.490 "seek_hole": false, 00:15:50.490 "seek_data": false, 00:15:50.490 "copy": true, 00:15:50.490 "nvme_iov_md": false 00:15:50.490 }, 00:15:50.490 "memory_domains": [ 00:15:50.490 { 00:15:50.490 "dma_device_id": "system", 00:15:50.490 "dma_device_type": 1 00:15:50.490 }, 00:15:50.490 { 00:15:50.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.490 "dma_device_type": 2 00:15:50.490 } 00:15:50.490 ], 00:15:50.490 "driver_specific": {} 00:15:50.490 } 00:15:50.490 ] 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.490 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.748 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.748 "name": "Existed_Raid", 00:15:50.748 "uuid": "dbeb2199-82ea-4b48-9f6e-650923b723a2", 00:15:50.748 "strip_size_kb": 0, 00:15:50.748 "state": "online", 00:15:50.748 "raid_level": "raid1", 00:15:50.748 "superblock": false, 00:15:50.748 "num_base_bdevs": 2, 00:15:50.748 "num_base_bdevs_discovered": 2, 00:15:50.748 "num_base_bdevs_operational": 
2, 00:15:50.748 "base_bdevs_list": [ 00:15:50.748 { 00:15:50.748 "name": "BaseBdev1", 00:15:50.748 "uuid": "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5", 00:15:50.748 "is_configured": true, 00:15:50.748 "data_offset": 0, 00:15:50.748 "data_size": 65536 00:15:50.748 }, 00:15:50.748 { 00:15:50.748 "name": "BaseBdev2", 00:15:50.748 "uuid": "78d9f943-8b27-42e4-b22c-7d3bdde15e23", 00:15:50.748 "is_configured": true, 00:15:50.748 "data_offset": 0, 00:15:50.748 "data_size": 65536 00:15:50.748 } 00:15:50.748 ] 00:15:50.748 }' 00:15:50.748 23:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.748 23:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:51.316 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:51.574 [2024-07-13 23:01:40.817477] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.574 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:51.574 "name": "Existed_Raid", 00:15:51.574 "aliases": [ 00:15:51.574 "dbeb2199-82ea-4b48-9f6e-650923b723a2" 00:15:51.575 ], 00:15:51.575 "product_name": "Raid Volume", 00:15:51.575 "block_size": 512, 00:15:51.575 "num_blocks": 65536, 00:15:51.575 "uuid": "dbeb2199-82ea-4b48-9f6e-650923b723a2", 00:15:51.575 "assigned_rate_limits": { 00:15:51.575 "rw_ios_per_sec": 0, 00:15:51.575 "rw_mbytes_per_sec": 0, 00:15:51.575 "r_mbytes_per_sec": 0, 00:15:51.575 "w_mbytes_per_sec": 0 00:15:51.575 }, 00:15:51.575 "claimed": false, 00:15:51.575 "zoned": false, 00:15:51.575 "supported_io_types": { 00:15:51.575 "read": true, 00:15:51.575 "write": true, 00:15:51.575 "unmap": false, 00:15:51.575 "flush": false, 00:15:51.575 "reset": true, 00:15:51.575 "nvme_admin": false, 00:15:51.575 "nvme_io": false, 00:15:51.575 "nvme_io_md": false, 00:15:51.575 "write_zeroes": true, 00:15:51.575 "zcopy": false, 00:15:51.575 "get_zone_info": false, 00:15:51.575 "zone_management": false, 00:15:51.575 "zone_append": false, 00:15:51.575 "compare": false, 00:15:51.575 "compare_and_write": false, 00:15:51.575 "abort": false, 00:15:51.575 "seek_hole": false, 00:15:51.575 "seek_data": false, 00:15:51.575 "copy": false, 00:15:51.575 "nvme_iov_md": false 00:15:51.575 }, 00:15:51.575 "memory_domains": [ 00:15:51.575 { 00:15:51.575 "dma_device_id": "system", 00:15:51.575 "dma_device_type": 1 00:15:51.575 }, 00:15:51.575 { 00:15:51.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.575 "dma_device_type": 2 00:15:51.575 }, 00:15:51.575 { 00:15:51.575 "dma_device_id": "system", 00:15:51.575 "dma_device_type": 1 00:15:51.575 }, 00:15:51.575 { 00:15:51.575 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.575 "dma_device_type": 2 00:15:51.575 } 00:15:51.575 ], 00:15:51.575 "driver_specific": { 00:15:51.575 "raid": { 00:15:51.575 "uuid": "dbeb2199-82ea-4b48-9f6e-650923b723a2", 00:15:51.575 "strip_size_kb": 0, 00:15:51.575 "state": "online", 00:15:51.575 "raid_level": "raid1", 00:15:51.575 "superblock": false, 00:15:51.575 "num_base_bdevs": 2, 00:15:51.575 "num_base_bdevs_discovered": 2, 00:15:51.575 "num_base_bdevs_operational": 2, 00:15:51.575 "base_bdevs_list": [ 00:15:51.575 { 00:15:51.575 "name": "BaseBdev1", 00:15:51.575 "uuid": "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5", 00:15:51.575 "is_configured": true, 00:15:51.575 "data_offset": 0, 00:15:51.575 "data_size": 65536 00:15:51.575 }, 00:15:51.575 { 00:15:51.575 "name": "BaseBdev2", 00:15:51.575 "uuid": "78d9f943-8b27-42e4-b22c-7d3bdde15e23", 00:15:51.575 "is_configured": true, 00:15:51.575 "data_offset": 0, 00:15:51.575 "data_size": 65536 00:15:51.575 } 00:15:51.575 ] 00:15:51.575 } 00:15:51.575 } 00:15:51.575 }' 00:15:51.575 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.575 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:51.575 BaseBdev2' 00:15:51.575 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.575 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:51.575 23:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.833 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.833 "name": "BaseBdev1", 00:15:51.833 "aliases": [ 00:15:51.833 "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5" 00:15:51.833 ], 00:15:51.833 "product_name": "Malloc disk", 00:15:51.833 "block_size": 512, 00:15:51.833 "num_blocks": 65536, 00:15:51.833 "uuid": "6bf8bffe-3b2f-47b9-85d5-6d66b5f8d3f5", 00:15:51.833 "assigned_rate_limits": { 00:15:51.833 "rw_ios_per_sec": 0, 00:15:51.833 "rw_mbytes_per_sec": 0, 00:15:51.833 "r_mbytes_per_sec": 0, 00:15:51.833 "w_mbytes_per_sec": 0 00:15:51.833 }, 00:15:51.833 "claimed": true, 00:15:51.833 "claim_type": "exclusive_write", 00:15:51.833 "zoned": false, 00:15:51.833 "supported_io_types": { 00:15:51.833 "read": true, 00:15:51.833 "write": true, 00:15:51.833 "unmap": true, 00:15:51.833 "flush": true, 00:15:51.833 "reset": true, 00:15:51.833 "nvme_admin": false, 00:15:51.833 "nvme_io": false, 00:15:51.833 "nvme_io_md": false, 00:15:51.833 "write_zeroes": true, 00:15:51.833 "zcopy": true, 00:15:51.833 "get_zone_info": false, 00:15:51.833 "zone_management": false, 00:15:51.833 "zone_append": false, 00:15:51.833 "compare": false, 00:15:51.833 "compare_and_write": false, 00:15:51.833 "abort": true, 00:15:51.833 "seek_hole": false, 00:15:51.833 "seek_data": false, 00:15:51.833 "copy": true, 00:15:51.833 "nvme_iov_md": false 00:15:51.833 }, 00:15:51.833 "memory_domains": [ 00:15:51.833 { 00:15:51.833 "dma_device_id": "system", 00:15:51.833 "dma_device_type": 1 00:15:51.833 }, 00:15:51.833 { 00:15:51.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.833 "dma_device_type": 2 00:15:51.833 } 00:15:51.833 ], 00:15:51.833 "driver_specific": {} 00:15:51.833 }' 00:15:51.834 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:51.834 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.834 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.834 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.092 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.351 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.351 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:52.351 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:52.351 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.609 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.609 "name": "BaseBdev2", 00:15:52.609 "aliases": [ 00:15:52.609 "78d9f943-8b27-42e4-b22c-7d3bdde15e23" 00:15:52.609 ], 00:15:52.609 "product_name": "Malloc disk", 00:15:52.609 "block_size": 512, 00:15:52.609 "num_blocks": 65536, 00:15:52.609 "uuid": "78d9f943-8b27-42e4-b22c-7d3bdde15e23", 00:15:52.609 "assigned_rate_limits": { 00:15:52.609 "rw_ios_per_sec": 0, 00:15:52.609 "rw_mbytes_per_sec": 0, 00:15:52.609 "r_mbytes_per_sec": 0, 00:15:52.609 "w_mbytes_per_sec": 0 00:15:52.609 }, 00:15:52.609 "claimed": true, 00:15:52.609 "claim_type": "exclusive_write", 00:15:52.609 "zoned": false, 00:15:52.609 "supported_io_types": { 00:15:52.609 "read": true, 00:15:52.609 "write": true, 00:15:52.609 "unmap": true, 00:15:52.609 "flush": true, 00:15:52.609 "reset": true, 00:15:52.609 "nvme_admin": false, 00:15:52.609 "nvme_io": false, 00:15:52.609 "nvme_io_md": false, 00:15:52.609 "write_zeroes": true, 00:15:52.609 "zcopy": true, 00:15:52.609 "get_zone_info": false, 00:15:52.609 "zone_management": false, 00:15:52.609 "zone_append": false, 00:15:52.609 "compare": false, 00:15:52.609 "compare_and_write": false, 00:15:52.609 "abort": true, 00:15:52.609 "seek_hole": false, 00:15:52.609 "seek_data": false, 00:15:52.609 "copy": true, 00:15:52.609 "nvme_iov_md": false 00:15:52.609 }, 00:15:52.609 "memory_domains": [ 00:15:52.609 { 00:15:52.609 "dma_device_id": "system", 00:15:52.609 "dma_device_type": 1 00:15:52.609 }, 00:15:52.609 { 00:15:52.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.609 "dma_device_type": 2 00:15:52.609 } 00:15:52.609 ], 00:15:52.609 "driver_specific": {} 00:15:52.609 }' 00:15:52.609 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.609 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.609 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:15:52.609 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.609 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.609 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.609 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.867 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.867 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.867 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.867 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.867 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.867 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:53.125 [2024-07-13 23:01:42.449710] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.125 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.384 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.384 "name": "Existed_Raid", 00:15:53.384 "uuid": "dbeb2199-82ea-4b48-9f6e-650923b723a2", 00:15:53.384 "strip_size_kb": 0, 00:15:53.384 "state": "online", 00:15:53.384 "raid_level": "raid1", 00:15:53.384 "superblock": false, 
00:15:53.384 "num_base_bdevs": 2, 00:15:53.384 "num_base_bdevs_discovered": 1, 00:15:53.384 "num_base_bdevs_operational": 1, 00:15:53.384 "base_bdevs_list": [ 00:15:53.384 { 00:15:53.384 "name": null, 00:15:53.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.384 "is_configured": false, 00:15:53.384 "data_offset": 0, 00:15:53.384 "data_size": 65536 00:15:53.384 }, 00:15:53.384 { 00:15:53.384 "name": "BaseBdev2", 00:15:53.384 "uuid": "78d9f943-8b27-42e4-b22c-7d3bdde15e23", 00:15:53.384 "is_configured": true, 00:15:53.384 "data_offset": 0, 00:15:53.384 "data_size": 65536 00:15:53.384 } 00:15:53.384 ] 00:15:53.384 }' 00:15:53.384 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.384 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.322 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:54.322 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.323 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.323 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:54.323 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:54.323 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.323 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:54.580 [2024-07-13 23:01:43.928580] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.580 [2024-07-13 23:01:43.928709] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.580 [2024-07-13 23:01:43.939295] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.580 [2024-07-13 23:01:43.939367] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.580 [2024-07-13 23:01:43.939381] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:54.580 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:54.580 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.580 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.580 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 133674 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 133674 ']' 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 133674 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.839 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133674 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133674' 00:15:55.097 killing process with pid 133674 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 133674 00:15:55.097 [2024-07-13 23:01:44.263752] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.097 [2024-07-13 23:01:44.263883] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 133674 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:55.097 00:15:55.097 real 0m11.230s 00:15:55.097 user 0m20.706s 00:15:55.097 sys 0m1.421s 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.097 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.097 ************************************ 00:15:55.097 END TEST raid_state_function_test 00:15:55.097 ************************************ 00:15:55.356 23:01:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:55.356 23:01:44 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:55.356 23:01:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:55.356 23:01:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.356 23:01:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.356 ************************************ 00:15:55.356 START TEST raid_state_function_test_sb 00:15:55.356 ************************************ 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=134050 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134050' 00:15:55.356 Process raid pid: 134050 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 134050 /var/tmp/spdk-raid.sock 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 134050 ']' 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:55.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.356 23:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.356 [2024-07-13 23:01:44.623630] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
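(For readers working through this trace: the block below is a hand-written sketch, not part of the captured output. It condenses the RPC sequence this superblock test exercises into its happy path, using only commands, paths, and names that appear verbatim in the surrounding log; the actual test also covers the intermediate "configuring" states by creating the raid before its base bdevs exist.)

# create two 32 MiB malloc base bdevs with 512-byte blocks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2

# assemble a raid1 bdev with an on-disk superblock (-s); once both base
# bdevs are claimed the raid moves from "configuring" to "online"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# dump the state the test asserts on ("configuring" / "online" / "offline")
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# tear down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

One difference from the non-superblock run earlier in the trace is visible in the state dumps: with -s the base bdevs report "data_offset": 2048 and "data_size": 63488 rather than 0 and 65536, the first 2048 blocks of each 65536-block device being reserved for the superblock.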
00:15:55.356 [2024-07-13 23:01:44.623897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.615 [2024-07-13 23:01:44.771470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.615 [2024-07-13 23:01:44.854370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.615 [2024-07-13 23:01:44.913179] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.182 23:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.182 23:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:56.182 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:56.440 [2024-07-13 23:01:45.693643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.440 [2024-07-13 23:01:45.693743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.440 [2024-07-13 23:01:45.693773] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.440 [2024-07-13 23:01:45.693793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.440 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.716 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.716 "name": "Existed_Raid", 00:15:56.716 "uuid": "43046931-bf9b-4497-9250-f09a49142a1d", 00:15:56.716 "strip_size_kb": 0, 00:15:56.716 "state": "configuring", 00:15:56.716 "raid_level": "raid1", 00:15:56.716 "superblock": true, 00:15:56.716 "num_base_bdevs": 2, 00:15:56.716 "num_base_bdevs_discovered": 0, 00:15:56.716 
"num_base_bdevs_operational": 2, 00:15:56.716 "base_bdevs_list": [ 00:15:56.716 { 00:15:56.716 "name": "BaseBdev1", 00:15:56.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.716 "is_configured": false, 00:15:56.716 "data_offset": 0, 00:15:56.716 "data_size": 0 00:15:56.716 }, 00:15:56.716 { 00:15:56.716 "name": "BaseBdev2", 00:15:56.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.716 "is_configured": false, 00:15:56.716 "data_offset": 0, 00:15:56.716 "data_size": 0 00:15:56.716 } 00:15:56.716 ] 00:15:56.716 }' 00:15:56.716 23:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.716 23:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.293 23:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:57.551 [2024-07-13 23:01:46.889797] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.551 [2024-07-13 23:01:46.889873] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:57.551 23:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:57.809 [2024-07-13 23:01:47.125801] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.809 [2024-07-13 23:01:47.125895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.809 [2024-07-13 23:01:47.125911] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.809 [2024-07-13 23:01:47.125946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.809 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.067 [2024-07-13 23:01:47.344664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.067 BaseBdev1 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:58.067 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.326 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.584 [ 00:15:58.584 { 00:15:58.584 "name": "BaseBdev1", 00:15:58.584 "aliases": [ 00:15:58.584 "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e" 
00:15:58.584 ], 00:15:58.584 "product_name": "Malloc disk", 00:15:58.584 "block_size": 512, 00:15:58.584 "num_blocks": 65536, 00:15:58.584 "uuid": "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e", 00:15:58.584 "assigned_rate_limits": { 00:15:58.584 "rw_ios_per_sec": 0, 00:15:58.584 "rw_mbytes_per_sec": 0, 00:15:58.584 "r_mbytes_per_sec": 0, 00:15:58.584 "w_mbytes_per_sec": 0 00:15:58.584 }, 00:15:58.584 "claimed": true, 00:15:58.584 "claim_type": "exclusive_write", 00:15:58.584 "zoned": false, 00:15:58.584 "supported_io_types": { 00:15:58.584 "read": true, 00:15:58.584 "write": true, 00:15:58.584 "unmap": true, 00:15:58.584 "flush": true, 00:15:58.584 "reset": true, 00:15:58.584 "nvme_admin": false, 00:15:58.584 "nvme_io": false, 00:15:58.584 "nvme_io_md": false, 00:15:58.584 "write_zeroes": true, 00:15:58.584 "zcopy": true, 00:15:58.584 "get_zone_info": false, 00:15:58.584 "zone_management": false, 00:15:58.584 "zone_append": false, 00:15:58.584 "compare": false, 00:15:58.584 "compare_and_write": false, 00:15:58.584 "abort": true, 00:15:58.584 "seek_hole": false, 00:15:58.584 "seek_data": false, 00:15:58.584 "copy": true, 00:15:58.584 "nvme_iov_md": false 00:15:58.584 }, 00:15:58.584 "memory_domains": [ 00:15:58.584 { 00:15:58.584 "dma_device_id": "system", 00:15:58.584 "dma_device_type": 1 00:15:58.584 }, 00:15:58.584 { 00:15:58.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.584 "dma_device_type": 2 00:15:58.584 } 00:15:58.584 ], 00:15:58.584 "driver_specific": {} 00:15:58.584 } 00:15:58.584 ] 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.584 23:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.843 23:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.843 "name": "Existed_Raid", 00:15:58.843 "uuid": "1075cb09-183a-494a-8995-3d101c4795ac", 00:15:58.843 "strip_size_kb": 0, 00:15:58.843 "state": "configuring", 00:15:58.843 "raid_level": "raid1", 00:15:58.843 "superblock": true, 00:15:58.843 "num_base_bdevs": 2, 00:15:58.843 "num_base_bdevs_discovered": 
1, 00:15:58.843 "num_base_bdevs_operational": 2, 00:15:58.843 "base_bdevs_list": [ 00:15:58.843 { 00:15:58.843 "name": "BaseBdev1", 00:15:58.843 "uuid": "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e", 00:15:58.843 "is_configured": true, 00:15:58.843 "data_offset": 2048, 00:15:58.843 "data_size": 63488 00:15:58.843 }, 00:15:58.843 { 00:15:58.843 "name": "BaseBdev2", 00:15:58.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.843 "is_configured": false, 00:15:58.843 "data_offset": 0, 00:15:58.843 "data_size": 0 00:15:58.843 } 00:15:58.843 ] 00:15:58.843 }' 00:15:58.843 23:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.843 23:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.410 23:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.668 [2024-07-13 23:01:48.945017] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.668 [2024-07-13 23:01:48.945094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:59.668 23:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:59.927 [2024-07-13 23:01:49.213261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.927 [2024-07-13 23:01:49.215598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.927 [2024-07-13 23:01:49.215674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.927 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.186 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.186 "name": "Existed_Raid", 00:16:00.186 "uuid": "826ebaa6-7fc4-48a0-87cf-071184416b19", 00:16:00.186 "strip_size_kb": 0, 00:16:00.186 "state": "configuring", 00:16:00.186 "raid_level": "raid1", 00:16:00.186 "superblock": true, 00:16:00.186 "num_base_bdevs": 2, 00:16:00.186 "num_base_bdevs_discovered": 1, 00:16:00.186 "num_base_bdevs_operational": 2, 00:16:00.186 "base_bdevs_list": [ 00:16:00.186 { 00:16:00.186 "name": "BaseBdev1", 00:16:00.186 "uuid": "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e", 00:16:00.186 "is_configured": true, 00:16:00.186 "data_offset": 2048, 00:16:00.186 "data_size": 63488 00:16:00.186 }, 00:16:00.186 { 00:16:00.186 "name": "BaseBdev2", 00:16:00.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.186 "is_configured": false, 00:16:00.186 "data_offset": 0, 00:16:00.186 "data_size": 0 00:16:00.186 } 00:16:00.186 ] 00:16:00.186 }' 00:16:00.186 23:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.186 23:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.752 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.010 [2024-07-13 23:01:50.405728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.010 [2024-07-13 23:01:50.406035] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:01.010 [2024-07-13 23:01:50.406053] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:01.010 [2024-07-13 23:01:50.406259] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:16:01.010 [2024-07-13 23:01:50.406746] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:01.010 [2024-07-13 23:01:50.406772] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:01.010 BaseBdev2 00:16:01.010 [2024-07-13 23:01:50.406958] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.269 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.527 [ 00:16:01.527 { 00:16:01.527 "name": "BaseBdev2", 00:16:01.527 "aliases": [ 00:16:01.527 
"0488b00e-ce9e-4d6b-8917-46f8b90a1f0e" 00:16:01.527 ], 00:16:01.527 "product_name": "Malloc disk", 00:16:01.527 "block_size": 512, 00:16:01.527 "num_blocks": 65536, 00:16:01.527 "uuid": "0488b00e-ce9e-4d6b-8917-46f8b90a1f0e", 00:16:01.527 "assigned_rate_limits": { 00:16:01.527 "rw_ios_per_sec": 0, 00:16:01.527 "rw_mbytes_per_sec": 0, 00:16:01.527 "r_mbytes_per_sec": 0, 00:16:01.527 "w_mbytes_per_sec": 0 00:16:01.527 }, 00:16:01.527 "claimed": true, 00:16:01.527 "claim_type": "exclusive_write", 00:16:01.527 "zoned": false, 00:16:01.527 "supported_io_types": { 00:16:01.527 "read": true, 00:16:01.527 "write": true, 00:16:01.527 "unmap": true, 00:16:01.527 "flush": true, 00:16:01.527 "reset": true, 00:16:01.527 "nvme_admin": false, 00:16:01.527 "nvme_io": false, 00:16:01.527 "nvme_io_md": false, 00:16:01.527 "write_zeroes": true, 00:16:01.527 "zcopy": true, 00:16:01.527 "get_zone_info": false, 00:16:01.527 "zone_management": false, 00:16:01.527 "zone_append": false, 00:16:01.527 "compare": false, 00:16:01.527 "compare_and_write": false, 00:16:01.527 "abort": true, 00:16:01.527 "seek_hole": false, 00:16:01.527 "seek_data": false, 00:16:01.527 "copy": true, 00:16:01.527 "nvme_iov_md": false 00:16:01.527 }, 00:16:01.527 "memory_domains": [ 00:16:01.527 { 00:16:01.527 "dma_device_id": "system", 00:16:01.527 "dma_device_type": 1 00:16:01.527 }, 00:16:01.527 { 00:16:01.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.527 "dma_device_type": 2 00:16:01.527 } 00:16:01.527 ], 00:16:01.527 "driver_specific": {} 00:16:01.527 } 00:16:01.527 ] 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.527 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.786 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.786 "name": "Existed_Raid", 00:16:01.786 "uuid": 
"826ebaa6-7fc4-48a0-87cf-071184416b19", 00:16:01.786 "strip_size_kb": 0, 00:16:01.786 "state": "online", 00:16:01.786 "raid_level": "raid1", 00:16:01.786 "superblock": true, 00:16:01.786 "num_base_bdevs": 2, 00:16:01.786 "num_base_bdevs_discovered": 2, 00:16:01.786 "num_base_bdevs_operational": 2, 00:16:01.786 "base_bdevs_list": [ 00:16:01.786 { 00:16:01.786 "name": "BaseBdev1", 00:16:01.786 "uuid": "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e", 00:16:01.786 "is_configured": true, 00:16:01.786 "data_offset": 2048, 00:16:01.786 "data_size": 63488 00:16:01.786 }, 00:16:01.786 { 00:16:01.786 "name": "BaseBdev2", 00:16:01.786 "uuid": "0488b00e-ce9e-4d6b-8917-46f8b90a1f0e", 00:16:01.786 "is_configured": true, 00:16:01.786 "data_offset": 2048, 00:16:01.786 "data_size": 63488 00:16:01.786 } 00:16:01.786 ] 00:16:01.786 }' 00:16:01.786 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.786 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:02.720 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:02.720 [2024-07-13 23:01:52.041785] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.720 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:02.720 "name": "Existed_Raid", 00:16:02.720 "aliases": [ 00:16:02.720 "826ebaa6-7fc4-48a0-87cf-071184416b19" 00:16:02.720 ], 00:16:02.720 "product_name": "Raid Volume", 00:16:02.720 "block_size": 512, 00:16:02.720 "num_blocks": 63488, 00:16:02.720 "uuid": "826ebaa6-7fc4-48a0-87cf-071184416b19", 00:16:02.720 "assigned_rate_limits": { 00:16:02.720 "rw_ios_per_sec": 0, 00:16:02.720 "rw_mbytes_per_sec": 0, 00:16:02.720 "r_mbytes_per_sec": 0, 00:16:02.720 "w_mbytes_per_sec": 0 00:16:02.720 }, 00:16:02.720 "claimed": false, 00:16:02.720 "zoned": false, 00:16:02.720 "supported_io_types": { 00:16:02.720 "read": true, 00:16:02.720 "write": true, 00:16:02.720 "unmap": false, 00:16:02.720 "flush": false, 00:16:02.720 "reset": true, 00:16:02.720 "nvme_admin": false, 00:16:02.720 "nvme_io": false, 00:16:02.720 "nvme_io_md": false, 00:16:02.720 "write_zeroes": true, 00:16:02.720 "zcopy": false, 00:16:02.720 "get_zone_info": false, 00:16:02.720 "zone_management": false, 00:16:02.720 "zone_append": false, 00:16:02.720 "compare": false, 00:16:02.720 "compare_and_write": false, 00:16:02.720 "abort": false, 00:16:02.720 "seek_hole": false, 00:16:02.720 "seek_data": false, 00:16:02.720 "copy": false, 00:16:02.720 "nvme_iov_md": false 00:16:02.720 }, 00:16:02.720 "memory_domains": [ 00:16:02.720 { 00:16:02.720 
"dma_device_id": "system", 00:16:02.720 "dma_device_type": 1 00:16:02.720 }, 00:16:02.720 { 00:16:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.720 "dma_device_type": 2 00:16:02.720 }, 00:16:02.720 { 00:16:02.720 "dma_device_id": "system", 00:16:02.720 "dma_device_type": 1 00:16:02.720 }, 00:16:02.720 { 00:16:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.721 "dma_device_type": 2 00:16:02.721 } 00:16:02.721 ], 00:16:02.721 "driver_specific": { 00:16:02.721 "raid": { 00:16:02.721 "uuid": "826ebaa6-7fc4-48a0-87cf-071184416b19", 00:16:02.721 "strip_size_kb": 0, 00:16:02.721 "state": "online", 00:16:02.721 "raid_level": "raid1", 00:16:02.721 "superblock": true, 00:16:02.721 "num_base_bdevs": 2, 00:16:02.721 "num_base_bdevs_discovered": 2, 00:16:02.721 "num_base_bdevs_operational": 2, 00:16:02.721 "base_bdevs_list": [ 00:16:02.721 { 00:16:02.721 "name": "BaseBdev1", 00:16:02.721 "uuid": "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e", 00:16:02.721 "is_configured": true, 00:16:02.721 "data_offset": 2048, 00:16:02.721 "data_size": 63488 00:16:02.721 }, 00:16:02.721 { 00:16:02.721 "name": "BaseBdev2", 00:16:02.721 "uuid": "0488b00e-ce9e-4d6b-8917-46f8b90a1f0e", 00:16:02.721 "is_configured": true, 00:16:02.721 "data_offset": 2048, 00:16:02.721 "data_size": 63488 00:16:02.721 } 00:16:02.721 ] 00:16:02.721 } 00:16:02.721 } 00:16:02.721 }' 00:16:02.721 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.721 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:02.721 BaseBdev2' 00:16:02.721 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.721 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:02.721 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.979 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.979 "name": "BaseBdev1", 00:16:02.979 "aliases": [ 00:16:02.979 "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e" 00:16:02.979 ], 00:16:02.979 "product_name": "Malloc disk", 00:16:02.979 "block_size": 512, 00:16:02.979 "num_blocks": 65536, 00:16:02.979 "uuid": "95c1b8cf-aa62-4e6e-b7ea-18fe172abb7e", 00:16:02.979 "assigned_rate_limits": { 00:16:02.979 "rw_ios_per_sec": 0, 00:16:02.979 "rw_mbytes_per_sec": 0, 00:16:02.979 "r_mbytes_per_sec": 0, 00:16:02.979 "w_mbytes_per_sec": 0 00:16:02.979 }, 00:16:02.979 "claimed": true, 00:16:02.979 "claim_type": "exclusive_write", 00:16:02.979 "zoned": false, 00:16:02.979 "supported_io_types": { 00:16:02.979 "read": true, 00:16:02.979 "write": true, 00:16:02.979 "unmap": true, 00:16:02.979 "flush": true, 00:16:02.979 "reset": true, 00:16:02.979 "nvme_admin": false, 00:16:02.979 "nvme_io": false, 00:16:02.979 "nvme_io_md": false, 00:16:02.979 "write_zeroes": true, 00:16:02.979 "zcopy": true, 00:16:02.979 "get_zone_info": false, 00:16:02.979 "zone_management": false, 00:16:02.979 "zone_append": false, 00:16:02.979 "compare": false, 00:16:02.979 "compare_and_write": false, 00:16:02.979 "abort": true, 00:16:02.979 "seek_hole": false, 00:16:02.979 "seek_data": false, 00:16:02.979 "copy": true, 00:16:02.979 "nvme_iov_md": false 00:16:02.979 }, 00:16:02.979 "memory_domains": [ 00:16:02.979 { 00:16:02.979 
"dma_device_id": "system", 00:16:02.980 "dma_device_type": 1 00:16:02.980 }, 00:16:02.980 { 00:16:02.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.980 "dma_device_type": 2 00:16:02.980 } 00:16:02.980 ], 00:16:02.980 "driver_specific": {} 00:16:02.980 }' 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.239 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:03.498 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:03.757 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:03.758 "name": "BaseBdev2", 00:16:03.758 "aliases": [ 00:16:03.758 "0488b00e-ce9e-4d6b-8917-46f8b90a1f0e" 00:16:03.758 ], 00:16:03.758 "product_name": "Malloc disk", 00:16:03.758 "block_size": 512, 00:16:03.758 "num_blocks": 65536, 00:16:03.758 "uuid": "0488b00e-ce9e-4d6b-8917-46f8b90a1f0e", 00:16:03.758 "assigned_rate_limits": { 00:16:03.758 "rw_ios_per_sec": 0, 00:16:03.758 "rw_mbytes_per_sec": 0, 00:16:03.758 "r_mbytes_per_sec": 0, 00:16:03.758 "w_mbytes_per_sec": 0 00:16:03.758 }, 00:16:03.758 "claimed": true, 00:16:03.758 "claim_type": "exclusive_write", 00:16:03.758 "zoned": false, 00:16:03.758 "supported_io_types": { 00:16:03.758 "read": true, 00:16:03.758 "write": true, 00:16:03.758 "unmap": true, 00:16:03.758 "flush": true, 00:16:03.758 "reset": true, 00:16:03.758 "nvme_admin": false, 00:16:03.758 "nvme_io": false, 00:16:03.758 "nvme_io_md": false, 00:16:03.758 "write_zeroes": true, 00:16:03.758 "zcopy": true, 00:16:03.758 "get_zone_info": false, 00:16:03.758 "zone_management": false, 00:16:03.758 "zone_append": false, 00:16:03.758 "compare": false, 00:16:03.758 "compare_and_write": false, 00:16:03.758 "abort": true, 00:16:03.758 "seek_hole": false, 00:16:03.758 "seek_data": false, 00:16:03.758 "copy": true, 00:16:03.758 "nvme_iov_md": false 00:16:03.758 }, 00:16:03.758 "memory_domains": [ 00:16:03.758 { 00:16:03.758 "dma_device_id": "system", 00:16:03.758 "dma_device_type": 1 00:16:03.758 }, 00:16:03.758 { 00:16:03.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:03.758 "dma_device_type": 2 00:16:03.758 } 00:16:03.758 ], 00:16:03.758 "driver_specific": {} 00:16:03.758 }' 00:16:03.758 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.758 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.758 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.758 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.758 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.758 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.758 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.015 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.015 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:04.015 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.015 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.015 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:04.015 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:04.273 [2024-07-13 23:01:53.530000] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.273 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.273 23:01:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.532 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.532 "name": "Existed_Raid", 00:16:04.532 "uuid": "826ebaa6-7fc4-48a0-87cf-071184416b19", 00:16:04.532 "strip_size_kb": 0, 00:16:04.532 "state": "online", 00:16:04.532 "raid_level": "raid1", 00:16:04.532 "superblock": true, 00:16:04.532 "num_base_bdevs": 2, 00:16:04.532 "num_base_bdevs_discovered": 1, 00:16:04.532 "num_base_bdevs_operational": 1, 00:16:04.532 "base_bdevs_list": [ 00:16:04.532 { 00:16:04.532 "name": null, 00:16:04.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.532 "is_configured": false, 00:16:04.532 "data_offset": 2048, 00:16:04.532 "data_size": 63488 00:16:04.532 }, 00:16:04.532 { 00:16:04.532 "name": "BaseBdev2", 00:16:04.532 "uuid": "0488b00e-ce9e-4d6b-8917-46f8b90a1f0e", 00:16:04.532 "is_configured": true, 00:16:04.532 "data_offset": 2048, 00:16:04.532 "data_size": 63488 00:16:04.532 } 00:16:04.532 ] 00:16:04.532 }' 00:16:04.532 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.532 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.098 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:05.098 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:05.357 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.357 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:05.617 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:05.617 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.617 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:05.617 [2024-07-13 23:01:54.992853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.617 [2024-07-13 23:01:54.993051] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.617 [2024-07-13 23:01:55.004191] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.617 [2024-07-13 23:01:55.004280] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.617 [2024-07-13 23:01:55.004294] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:05.617 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:05.617 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:05.876 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.876 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 134050 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 134050 ']' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 134050 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134050 00:16:06.135 killing process with pid 134050 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134050' 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 134050 00:16:06.135 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 134050 00:16:06.135 [2024-07-13 23:01:55.355126] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.135 [2024-07-13 23:01:55.355228] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.394 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:06.394 00:16:06.394 real 0m11.034s 00:16:06.394 user 0m20.420s 00:16:06.394 sys 0m1.337s 00:16:06.394 ************************************ 00:16:06.394 END TEST raid_state_function_test_sb 00:16:06.394 ************************************ 00:16:06.394 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.394 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.394 23:01:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:06.394 23:01:55 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:06.394 23:01:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:06.394 23:01:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.394 23:01:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.394 ************************************ 00:16:06.394 START TEST raid_superblock_test 00:16:06.394 ************************************ 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=134425 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 134425 /var/tmp/spdk-raid.sock 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 134425 ']' 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.394 23:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.394 [2024-07-13 23:01:55.711207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
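For reference, the bdev_svc launch and waitforlisten handshake traced above reduce to the following stand-alone sketch (same binary, socket, and -L flag as in the trace; the polling loop is an assumed stand-in for the suite's waitforlisten helper, not its actual implementation):

  # Start a bare SPDK bdev service on a private RPC socket, with raid debug logging on
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # Block until the UNIX-domain socket answers RPCs (roughly what waitforlisten does)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
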
00:16:06.394 [2024-07-13 23:01:55.711496] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134425 ] 00:16:06.653 [2024-07-13 23:01:55.862164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.653 [2024-07-13 23:01:55.927745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.653 [2024-07-13 23:01:55.986145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:07.588 malloc1 00:16:07.588 23:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.846 [2024-07-13 23:01:57.168695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.846 [2024-07-13 23:01:57.168958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.846 [2024-07-13 23:01:57.169049] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:07.846 [2024-07-13 23:01:57.169148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.846 [2024-07-13 23:01:57.173850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.846 [2024-07-13 23:01:57.173921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.846 pt1 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.846 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:08.104 malloc2 00:16:08.104 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.361 [2024-07-13 23:01:57.685226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.361 [2024-07-13 23:01:57.685341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.361 [2024-07-13 23:01:57.685381] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:08.361 [2024-07-13 23:01:57.685449] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.361 [2024-07-13 23:01:57.687984] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.361 [2024-07-13 23:01:57.688044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.361 pt2 00:16:08.361 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:08.361 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:08.361 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:08.619 [2024-07-13 23:01:57.901336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:08.619 [2024-07-13 23:01:57.903386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.619 [2024-07-13 23:01:57.903624] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:16:08.619 [2024-07-13 23:01:57.903640] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:08.619 [2024-07-13 23:01:57.903854] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:08.619 [2024-07-13 23:01:57.904357] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:16:08.619 [2024-07-13 23:01:57.904381] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:16:08.619 [2024-07-13 23:01:57.904570] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.619 23:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.876 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.876 "name": "raid_bdev1", 00:16:08.876 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:08.876 "strip_size_kb": 0, 00:16:08.876 "state": "online", 00:16:08.876 "raid_level": "raid1", 00:16:08.876 "superblock": true, 00:16:08.876 "num_base_bdevs": 2, 00:16:08.876 "num_base_bdevs_discovered": 2, 00:16:08.876 "num_base_bdevs_operational": 2, 00:16:08.876 "base_bdevs_list": [ 00:16:08.876 { 00:16:08.876 "name": "pt1", 00:16:08.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.876 "is_configured": true, 00:16:08.876 "data_offset": 2048, 00:16:08.876 "data_size": 63488 00:16:08.876 }, 00:16:08.876 { 00:16:08.876 "name": "pt2", 00:16:08.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.876 "is_configured": true, 00:16:08.876 "data_offset": 2048, 00:16:08.876 "data_size": 63488 00:16:08.876 } 00:16:08.876 ] 00:16:08.876 }' 00:16:08.876 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.876 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:09.443 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:09.715 [2024-07-13 23:01:58.957897] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.715 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:09.715 "name": "raid_bdev1", 00:16:09.715 "aliases": [ 00:16:09.715 "da580d50-5c69-40ea-9cf4-d8be131c225b" 00:16:09.715 ], 00:16:09.715 "product_name": "Raid Volume", 00:16:09.715 "block_size": 512, 00:16:09.715 "num_blocks": 63488, 00:16:09.715 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:09.715 "assigned_rate_limits": { 00:16:09.715 "rw_ios_per_sec": 0, 00:16:09.715 "rw_mbytes_per_sec": 0, 00:16:09.715 "r_mbytes_per_sec": 0, 00:16:09.715 "w_mbytes_per_sec": 0 00:16:09.715 }, 
00:16:09.715 "claimed": false, 00:16:09.715 "zoned": false, 00:16:09.715 "supported_io_types": { 00:16:09.715 "read": true, 00:16:09.715 "write": true, 00:16:09.715 "unmap": false, 00:16:09.715 "flush": false, 00:16:09.715 "reset": true, 00:16:09.715 "nvme_admin": false, 00:16:09.715 "nvme_io": false, 00:16:09.715 "nvme_io_md": false, 00:16:09.715 "write_zeroes": true, 00:16:09.715 "zcopy": false, 00:16:09.715 "get_zone_info": false, 00:16:09.715 "zone_management": false, 00:16:09.715 "zone_append": false, 00:16:09.715 "compare": false, 00:16:09.715 "compare_and_write": false, 00:16:09.715 "abort": false, 00:16:09.715 "seek_hole": false, 00:16:09.715 "seek_data": false, 00:16:09.715 "copy": false, 00:16:09.715 "nvme_iov_md": false 00:16:09.715 }, 00:16:09.715 "memory_domains": [ 00:16:09.715 { 00:16:09.715 "dma_device_id": "system", 00:16:09.715 "dma_device_type": 1 00:16:09.715 }, 00:16:09.715 { 00:16:09.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.715 "dma_device_type": 2 00:16:09.715 }, 00:16:09.715 { 00:16:09.715 "dma_device_id": "system", 00:16:09.715 "dma_device_type": 1 00:16:09.715 }, 00:16:09.715 { 00:16:09.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.715 "dma_device_type": 2 00:16:09.715 } 00:16:09.715 ], 00:16:09.715 "driver_specific": { 00:16:09.715 "raid": { 00:16:09.715 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:09.715 "strip_size_kb": 0, 00:16:09.715 "state": "online", 00:16:09.715 "raid_level": "raid1", 00:16:09.715 "superblock": true, 00:16:09.715 "num_base_bdevs": 2, 00:16:09.715 "num_base_bdevs_discovered": 2, 00:16:09.715 "num_base_bdevs_operational": 2, 00:16:09.715 "base_bdevs_list": [ 00:16:09.715 { 00:16:09.715 "name": "pt1", 00:16:09.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.715 "is_configured": true, 00:16:09.715 "data_offset": 2048, 00:16:09.715 "data_size": 63488 00:16:09.715 }, 00:16:09.715 { 00:16:09.715 "name": "pt2", 00:16:09.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.715 "is_configured": true, 00:16:09.715 "data_offset": 2048, 00:16:09.715 "data_size": 63488 00:16:09.715 } 00:16:09.715 ] 00:16:09.715 } 00:16:09.715 } 00:16:09.715 }' 00:16:09.715 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.715 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:09.715 pt2' 00:16:09.715 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:09.715 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:09.715 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:09.985 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:09.985 "name": "pt1", 00:16:09.985 "aliases": [ 00:16:09.985 "00000000-0000-0000-0000-000000000001" 00:16:09.985 ], 00:16:09.985 "product_name": "passthru", 00:16:09.985 "block_size": 512, 00:16:09.985 "num_blocks": 65536, 00:16:09.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.985 "assigned_rate_limits": { 00:16:09.985 "rw_ios_per_sec": 0, 00:16:09.985 "rw_mbytes_per_sec": 0, 00:16:09.985 "r_mbytes_per_sec": 0, 00:16:09.985 "w_mbytes_per_sec": 0 00:16:09.985 }, 00:16:09.985 "claimed": true, 00:16:09.985 "claim_type": "exclusive_write", 00:16:09.985 "zoned": false, 00:16:09.985 
"supported_io_types": { 00:16:09.985 "read": true, 00:16:09.985 "write": true, 00:16:09.985 "unmap": true, 00:16:09.985 "flush": true, 00:16:09.985 "reset": true, 00:16:09.985 "nvme_admin": false, 00:16:09.985 "nvme_io": false, 00:16:09.985 "nvme_io_md": false, 00:16:09.985 "write_zeroes": true, 00:16:09.985 "zcopy": true, 00:16:09.985 "get_zone_info": false, 00:16:09.985 "zone_management": false, 00:16:09.985 "zone_append": false, 00:16:09.985 "compare": false, 00:16:09.985 "compare_and_write": false, 00:16:09.985 "abort": true, 00:16:09.985 "seek_hole": false, 00:16:09.985 "seek_data": false, 00:16:09.985 "copy": true, 00:16:09.985 "nvme_iov_md": false 00:16:09.985 }, 00:16:09.985 "memory_domains": [ 00:16:09.985 { 00:16:09.985 "dma_device_id": "system", 00:16:09.985 "dma_device_type": 1 00:16:09.985 }, 00:16:09.985 { 00:16:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.985 "dma_device_type": 2 00:16:09.985 } 00:16:09.985 ], 00:16:09.985 "driver_specific": { 00:16:09.985 "passthru": { 00:16:09.985 "name": "pt1", 00:16:09.985 "base_bdev_name": "malloc1" 00:16:09.985 } 00:16:09.985 } 00:16:09.985 }' 00:16:09.985 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.985 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:09.985 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:09.985 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:09.985 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:10.244 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.504 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.504 "name": "pt2", 00:16:10.504 "aliases": [ 00:16:10.504 "00000000-0000-0000-0000-000000000002" 00:16:10.504 ], 00:16:10.504 "product_name": "passthru", 00:16:10.504 "block_size": 512, 00:16:10.504 "num_blocks": 65536, 00:16:10.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.504 "assigned_rate_limits": { 00:16:10.504 "rw_ios_per_sec": 0, 00:16:10.504 "rw_mbytes_per_sec": 0, 00:16:10.504 "r_mbytes_per_sec": 0, 00:16:10.504 "w_mbytes_per_sec": 0 00:16:10.504 }, 00:16:10.504 "claimed": true, 00:16:10.504 "claim_type": "exclusive_write", 00:16:10.504 "zoned": false, 00:16:10.504 "supported_io_types": { 00:16:10.504 "read": true, 00:16:10.504 "write": true, 00:16:10.504 "unmap": true, 00:16:10.504 "flush": true, 00:16:10.504 
"reset": true, 00:16:10.504 "nvme_admin": false, 00:16:10.504 "nvme_io": false, 00:16:10.504 "nvme_io_md": false, 00:16:10.504 "write_zeroes": true, 00:16:10.504 "zcopy": true, 00:16:10.504 "get_zone_info": false, 00:16:10.504 "zone_management": false, 00:16:10.504 "zone_append": false, 00:16:10.504 "compare": false, 00:16:10.504 "compare_and_write": false, 00:16:10.504 "abort": true, 00:16:10.504 "seek_hole": false, 00:16:10.504 "seek_data": false, 00:16:10.504 "copy": true, 00:16:10.504 "nvme_iov_md": false 00:16:10.504 }, 00:16:10.504 "memory_domains": [ 00:16:10.504 { 00:16:10.504 "dma_device_id": "system", 00:16:10.504 "dma_device_type": 1 00:16:10.504 }, 00:16:10.504 { 00:16:10.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.504 "dma_device_type": 2 00:16:10.504 } 00:16:10.504 ], 00:16:10.504 "driver_specific": { 00:16:10.504 "passthru": { 00:16:10.504 "name": "pt2", 00:16:10.504 "base_bdev_name": "malloc2" 00:16:10.504 } 00:16:10.504 } 00:16:10.504 }' 00:16:10.504 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.504 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.762 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:10.762 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.763 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.763 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.763 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.763 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.763 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.763 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.763 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.021 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.021 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:11.021 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:11.279 [2024-07-13 23:02:00.482163] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.279 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=da580d50-5c69-40ea-9cf4-d8be131c225b 00:16:11.279 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z da580d50-5c69-40ea-9cf4-d8be131c225b ']' 00:16:11.279 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:11.538 [2024-07-13 23:02:00.705962] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.538 [2024-07-13 23:02:00.705992] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.538 [2024-07-13 23:02:00.706140] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.538 [2024-07-13 23:02:00.706238] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:11.538 [2024-07-13 23:02:00.706253] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:16:11.538 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.538 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:11.796 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:11.796 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:11.796 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:11.796 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:12.053 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:12.053 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:12.053 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:12.053 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:12.620 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:12.620 [2024-07-13 23:02:01.966194] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:12.620 [2024-07-13 23:02:01.968276] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:12.620 [2024-07-13 23:02:01.968365] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:12.620 [2024-07-13 23:02:01.968456] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:12.620 [2024-07-13 23:02:01.968509] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.621 [2024-07-13 23:02:01.968522] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:16:12.621 request: 00:16:12.621 { 00:16:12.621 "name": "raid_bdev1", 00:16:12.621 "raid_level": "raid1", 00:16:12.621 "base_bdevs": [ 00:16:12.621 "malloc1", 00:16:12.621 "malloc2" 00:16:12.621 ], 00:16:12.621 "superblock": false, 00:16:12.621 "method": "bdev_raid_create", 00:16:12.621 "req_id": 1 00:16:12.621 } 00:16:12.621 Got JSON-RPC error response 00:16:12.621 response: 00:16:12.621 { 00:16:12.621 "code": -17, 00:16:12.621 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:12.621 } 00:16:12.621 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:12.621 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:12.621 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:12.621 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:12.621 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.621 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:12.880 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:12.880 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:12.880 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.138 [2024-07-13 23:02:02.410281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.138 [2024-07-13 23:02:02.410374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.138 [2024-07-13 23:02:02.410422] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:13.138 [2024-07-13 23:02:02.410454] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.138 [2024-07-13 23:02:02.412939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.138 [2024-07-13 23:02:02.412996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.138 [2024-07-13 23:02:02.413077] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:13.138 [2024-07-13 23:02:02.413153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.138 pt1 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.138 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.139 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.139 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.398 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.398 "name": "raid_bdev1", 00:16:13.398 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:13.398 "strip_size_kb": 0, 00:16:13.398 "state": "configuring", 00:16:13.398 "raid_level": "raid1", 00:16:13.398 "superblock": true, 00:16:13.398 "num_base_bdevs": 2, 00:16:13.398 "num_base_bdevs_discovered": 1, 00:16:13.398 "num_base_bdevs_operational": 2, 00:16:13.398 "base_bdevs_list": [ 00:16:13.398 { 00:16:13.398 "name": "pt1", 00:16:13.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.398 "is_configured": true, 00:16:13.398 "data_offset": 2048, 00:16:13.398 "data_size": 63488 00:16:13.398 }, 00:16:13.398 { 00:16:13.398 "name": null, 00:16:13.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.398 "is_configured": false, 00:16:13.398 "data_offset": 2048, 00:16:13.398 "data_size": 63488 00:16:13.398 } 00:16:13.398 ] 00:16:13.398 }' 00:16:13.398 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.398 23:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.966 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:13.966 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:13.966 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:13.966 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:14.225 [2024-07-13 23:02:03.414459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:14.225 [2024-07-13 23:02:03.414553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.225 [2024-07-13 23:02:03.414593] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:14.225 [2024-07-13 23:02:03.414626] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.225 [2024-07-13 23:02:03.415114] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.225 [2024-07-13 23:02:03.415168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:14.225 [2024-07-13 23:02:03.415265] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:14.225 [2024-07-13 23:02:03.415293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:14.225 [2024-07-13 23:02:03.415448] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:14.225 [2024-07-13 23:02:03.415465] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.225 [2024-07-13 23:02:03.415556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:14.225 [2024-07-13 23:02:03.415941] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:14.225 [2024-07-13 23:02:03.415969] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:14.225 [2024-07-13 23:02:03.416083] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.225 pt2 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.225 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.484 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.484 "name": "raid_bdev1", 00:16:14.484 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:14.484 "strip_size_kb": 0, 00:16:14.484 "state": "online", 00:16:14.484 "raid_level": "raid1", 00:16:14.484 "superblock": true, 00:16:14.484 "num_base_bdevs": 2, 00:16:14.484 "num_base_bdevs_discovered": 2, 00:16:14.484 "num_base_bdevs_operational": 2, 00:16:14.484 "base_bdevs_list": [ 00:16:14.484 { 00:16:14.484 "name": "pt1", 00:16:14.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.484 "is_configured": true, 00:16:14.484 "data_offset": 2048, 00:16:14.484 "data_size": 63488 00:16:14.484 }, 00:16:14.484 { 
00:16:14.484 "name": "pt2", 00:16:14.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.484 "is_configured": true, 00:16:14.484 "data_offset": 2048, 00:16:14.484 "data_size": 63488 00:16:14.484 } 00:16:14.484 ] 00:16:14.484 }' 00:16:14.484 23:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.484 23:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:15.051 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:15.310 [2024-07-13 23:02:04.510907] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.310 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:15.310 "name": "raid_bdev1", 00:16:15.310 "aliases": [ 00:16:15.310 "da580d50-5c69-40ea-9cf4-d8be131c225b" 00:16:15.310 ], 00:16:15.310 "product_name": "Raid Volume", 00:16:15.310 "block_size": 512, 00:16:15.310 "num_blocks": 63488, 00:16:15.310 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:15.310 "assigned_rate_limits": { 00:16:15.310 "rw_ios_per_sec": 0, 00:16:15.310 "rw_mbytes_per_sec": 0, 00:16:15.310 "r_mbytes_per_sec": 0, 00:16:15.310 "w_mbytes_per_sec": 0 00:16:15.310 }, 00:16:15.310 "claimed": false, 00:16:15.310 "zoned": false, 00:16:15.310 "supported_io_types": { 00:16:15.310 "read": true, 00:16:15.310 "write": true, 00:16:15.310 "unmap": false, 00:16:15.310 "flush": false, 00:16:15.310 "reset": true, 00:16:15.310 "nvme_admin": false, 00:16:15.310 "nvme_io": false, 00:16:15.310 "nvme_io_md": false, 00:16:15.310 "write_zeroes": true, 00:16:15.310 "zcopy": false, 00:16:15.310 "get_zone_info": false, 00:16:15.310 "zone_management": false, 00:16:15.310 "zone_append": false, 00:16:15.310 "compare": false, 00:16:15.310 "compare_and_write": false, 00:16:15.310 "abort": false, 00:16:15.310 "seek_hole": false, 00:16:15.310 "seek_data": false, 00:16:15.310 "copy": false, 00:16:15.310 "nvme_iov_md": false 00:16:15.310 }, 00:16:15.310 "memory_domains": [ 00:16:15.310 { 00:16:15.310 "dma_device_id": "system", 00:16:15.310 "dma_device_type": 1 00:16:15.310 }, 00:16:15.310 { 00:16:15.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.310 "dma_device_type": 2 00:16:15.310 }, 00:16:15.310 { 00:16:15.310 "dma_device_id": "system", 00:16:15.310 "dma_device_type": 1 00:16:15.310 }, 00:16:15.310 { 00:16:15.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.310 "dma_device_type": 2 00:16:15.310 } 00:16:15.310 ], 00:16:15.310 "driver_specific": { 00:16:15.310 "raid": { 00:16:15.310 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:15.310 "strip_size_kb": 0, 00:16:15.310 "state": "online", 00:16:15.310 "raid_level": "raid1", 
00:16:15.310 "superblock": true, 00:16:15.310 "num_base_bdevs": 2, 00:16:15.310 "num_base_bdevs_discovered": 2, 00:16:15.310 "num_base_bdevs_operational": 2, 00:16:15.310 "base_bdevs_list": [ 00:16:15.310 { 00:16:15.310 "name": "pt1", 00:16:15.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.310 "is_configured": true, 00:16:15.310 "data_offset": 2048, 00:16:15.310 "data_size": 63488 00:16:15.310 }, 00:16:15.310 { 00:16:15.310 "name": "pt2", 00:16:15.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.310 "is_configured": true, 00:16:15.310 "data_offset": 2048, 00:16:15.310 "data_size": 63488 00:16:15.310 } 00:16:15.310 ] 00:16:15.310 } 00:16:15.310 } 00:16:15.310 }' 00:16:15.310 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:15.310 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:15.310 pt2' 00:16:15.310 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.310 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.310 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:15.569 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.569 "name": "pt1", 00:16:15.569 "aliases": [ 00:16:15.569 "00000000-0000-0000-0000-000000000001" 00:16:15.569 ], 00:16:15.569 "product_name": "passthru", 00:16:15.569 "block_size": 512, 00:16:15.569 "num_blocks": 65536, 00:16:15.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.569 "assigned_rate_limits": { 00:16:15.569 "rw_ios_per_sec": 0, 00:16:15.569 "rw_mbytes_per_sec": 0, 00:16:15.569 "r_mbytes_per_sec": 0, 00:16:15.569 "w_mbytes_per_sec": 0 00:16:15.569 }, 00:16:15.569 "claimed": true, 00:16:15.569 "claim_type": "exclusive_write", 00:16:15.569 "zoned": false, 00:16:15.569 "supported_io_types": { 00:16:15.569 "read": true, 00:16:15.569 "write": true, 00:16:15.569 "unmap": true, 00:16:15.569 "flush": true, 00:16:15.569 "reset": true, 00:16:15.569 "nvme_admin": false, 00:16:15.569 "nvme_io": false, 00:16:15.569 "nvme_io_md": false, 00:16:15.569 "write_zeroes": true, 00:16:15.569 "zcopy": true, 00:16:15.569 "get_zone_info": false, 00:16:15.569 "zone_management": false, 00:16:15.569 "zone_append": false, 00:16:15.569 "compare": false, 00:16:15.569 "compare_and_write": false, 00:16:15.569 "abort": true, 00:16:15.569 "seek_hole": false, 00:16:15.569 "seek_data": false, 00:16:15.569 "copy": true, 00:16:15.569 "nvme_iov_md": false 00:16:15.569 }, 00:16:15.569 "memory_domains": [ 00:16:15.569 { 00:16:15.569 "dma_device_id": "system", 00:16:15.569 "dma_device_type": 1 00:16:15.569 }, 00:16:15.569 { 00:16:15.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.569 "dma_device_type": 2 00:16:15.569 } 00:16:15.569 ], 00:16:15.569 "driver_specific": { 00:16:15.569 "passthru": { 00:16:15.569 "name": "pt1", 00:16:15.569 "base_bdev_name": "malloc1" 00:16:15.569 } 00:16:15.569 } 00:16:15.569 }' 00:16:15.569 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.569 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.569 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.569 23:02:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.828 23:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.828 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.828 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.828 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.828 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.828 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.828 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.086 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.086 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.086 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:16.086 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.344 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.345 "name": "pt2", 00:16:16.345 "aliases": [ 00:16:16.345 "00000000-0000-0000-0000-000000000002" 00:16:16.345 ], 00:16:16.345 "product_name": "passthru", 00:16:16.345 "block_size": 512, 00:16:16.345 "num_blocks": 65536, 00:16:16.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.345 "assigned_rate_limits": { 00:16:16.345 "rw_ios_per_sec": 0, 00:16:16.345 "rw_mbytes_per_sec": 0, 00:16:16.345 "r_mbytes_per_sec": 0, 00:16:16.345 "w_mbytes_per_sec": 0 00:16:16.345 }, 00:16:16.345 "claimed": true, 00:16:16.345 "claim_type": "exclusive_write", 00:16:16.345 "zoned": false, 00:16:16.345 "supported_io_types": { 00:16:16.345 "read": true, 00:16:16.345 "write": true, 00:16:16.345 "unmap": true, 00:16:16.345 "flush": true, 00:16:16.345 "reset": true, 00:16:16.345 "nvme_admin": false, 00:16:16.345 "nvme_io": false, 00:16:16.345 "nvme_io_md": false, 00:16:16.345 "write_zeroes": true, 00:16:16.345 "zcopy": true, 00:16:16.345 "get_zone_info": false, 00:16:16.345 "zone_management": false, 00:16:16.345 "zone_append": false, 00:16:16.345 "compare": false, 00:16:16.345 "compare_and_write": false, 00:16:16.345 "abort": true, 00:16:16.345 "seek_hole": false, 00:16:16.345 "seek_data": false, 00:16:16.345 "copy": true, 00:16:16.345 "nvme_iov_md": false 00:16:16.345 }, 00:16:16.345 "memory_domains": [ 00:16:16.345 { 00:16:16.345 "dma_device_id": "system", 00:16:16.345 "dma_device_type": 1 00:16:16.345 }, 00:16:16.345 { 00:16:16.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.345 "dma_device_type": 2 00:16:16.345 } 00:16:16.345 ], 00:16:16.345 "driver_specific": { 00:16:16.345 "passthru": { 00:16:16.345 "name": "pt2", 00:16:16.345 "base_bdev_name": "malloc2" 00:16:16.345 } 00:16:16.345 } 00:16:16.345 }' 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.345 
23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.345 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.604 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:16.604 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.604 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.604 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.604 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:16.604 23:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:16.862 [2024-07-13 23:02:06.107198] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.862 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' da580d50-5c69-40ea-9cf4-d8be131c225b '!=' da580d50-5c69-40ea-9cf4-d8be131c225b ']' 00:16:16.862 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:16.862 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:16.862 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:16.862 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:17.120 [2024-07-13 23:02:06.359161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.120 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.376 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.376 "name": "raid_bdev1", 00:16:17.376 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:17.376 "strip_size_kb": 0, 00:16:17.376 "state": "online", 00:16:17.376 "raid_level": "raid1", 00:16:17.376 
"superblock": true, 00:16:17.376 "num_base_bdevs": 2, 00:16:17.376 "num_base_bdevs_discovered": 1, 00:16:17.376 "num_base_bdevs_operational": 1, 00:16:17.376 "base_bdevs_list": [ 00:16:17.376 { 00:16:17.376 "name": null, 00:16:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.376 "is_configured": false, 00:16:17.376 "data_offset": 2048, 00:16:17.376 "data_size": 63488 00:16:17.376 }, 00:16:17.376 { 00:16:17.376 "name": "pt2", 00:16:17.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.376 "is_configured": true, 00:16:17.376 "data_offset": 2048, 00:16:17.376 "data_size": 63488 00:16:17.376 } 00:16:17.376 ] 00:16:17.376 }' 00:16:17.376 23:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.376 23:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.941 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:18.199 [2024-07-13 23:02:07.523381] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.199 [2024-07-13 23:02:07.523620] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.199 [2024-07-13 23:02:07.523832] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.199 [2024-07-13 23:02:07.524038] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.199 [2024-07-13 23:02:07.524177] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:18.199 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.199 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:18.457 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:18.457 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:18.457 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:18.457 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:18.457 23:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:18.715 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:18.715 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:18.715 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:18.715 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:18.715 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:18.715 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.973 [2024-07-13 23:02:08.299601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.973 [2024-07-13 23:02:08.300605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.973 [2024-07-13 23:02:08.300703] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:18.973 [2024-07-13 23:02:08.300973] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.973 [2024-07-13 23:02:08.303630] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.973 [2024-07-13 23:02:08.303875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.973 [2024-07-13 23:02:08.304098] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.973 [2024-07-13 23:02:08.304283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.973 [2024-07-13 23:02:08.304583] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:18.973 [2024-07-13 23:02:08.304727] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:18.973 [2024-07-13 23:02:08.304870] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:18.973 pt2 00:16:18.973 [2024-07-13 23:02:08.305427] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:18.973 [2024-07-13 23:02:08.305453] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:18.973 [2024-07-13 23:02:08.305576] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.973 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.231 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.231 "name": "raid_bdev1", 00:16:19.231 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:19.231 "strip_size_kb": 0, 00:16:19.231 "state": "online", 00:16:19.231 "raid_level": "raid1", 00:16:19.231 "superblock": true, 00:16:19.231 "num_base_bdevs": 2, 00:16:19.231 "num_base_bdevs_discovered": 1, 00:16:19.231 "num_base_bdevs_operational": 1, 00:16:19.231 "base_bdevs_list": [ 00:16:19.231 { 00:16:19.231 "name": null, 00:16:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.231 "is_configured": false, 00:16:19.231 "data_offset": 
2048, 00:16:19.231 "data_size": 63488 00:16:19.231 }, 00:16:19.231 { 00:16:19.231 "name": "pt2", 00:16:19.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.231 "is_configured": true, 00:16:19.231 "data_offset": 2048, 00:16:19.231 "data_size": 63488 00:16:19.231 } 00:16:19.231 ] 00:16:19.231 }' 00:16:19.231 23:02:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.231 23:02:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.797 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:20.055 [2024-07-13 23:02:09.376481] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.055 [2024-07-13 23:02:09.376682] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.055 [2024-07-13 23:02:09.376862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.055 [2024-07-13 23:02:09.377039] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.055 [2024-07-13 23:02:09.377155] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:20.055 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.055 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:20.313 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:20.313 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:20.313 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:20.313 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.572 [2024-07-13 23:02:09.844575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.572 [2024-07-13 23:02:09.844856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.572 [2024-07-13 23:02:09.845064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:20.572 [2024-07-13 23:02:09.845214] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.572 [2024-07-13 23:02:09.847604] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.572 [2024-07-13 23:02:09.847824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.572 [2024-07-13 23:02:09.848040] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:20.572 [2024-07-13 23:02:09.848211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.572 [2024-07-13 23:02:09.848540] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:20.572 [2024-07-13 23:02:09.848664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.572 [2024-07-13 23:02:09.848727] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, 
state configuring 00:16:20.572 [2024-07-13 23:02:09.848983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.572 [2024-07-13 23:02:09.849245] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:16:20.572 [2024-07-13 23:02:09.849382] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:20.572 [2024-07-13 23:02:09.849500] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:20.572 pt1 00:16:20.572 [2024-07-13 23:02:09.849969] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:16:20.572 [2024-07-13 23:02:09.850093] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:16:20.572 [2024-07-13 23:02:09.850348] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.572 23:02:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.830 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.830 "name": "raid_bdev1", 00:16:20.830 "uuid": "da580d50-5c69-40ea-9cf4-d8be131c225b", 00:16:20.830 "strip_size_kb": 0, 00:16:20.830 "state": "online", 00:16:20.830 "raid_level": "raid1", 00:16:20.830 "superblock": true, 00:16:20.830 "num_base_bdevs": 2, 00:16:20.830 "num_base_bdevs_discovered": 1, 00:16:20.830 "num_base_bdevs_operational": 1, 00:16:20.830 "base_bdevs_list": [ 00:16:20.830 { 00:16:20.830 "name": null, 00:16:20.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.830 "is_configured": false, 00:16:20.830 "data_offset": 2048, 00:16:20.830 "data_size": 63488 00:16:20.830 }, 00:16:20.830 { 00:16:20.830 "name": "pt2", 00:16:20.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.830 "is_configured": true, 00:16:20.830 "data_offset": 2048, 00:16:20.830 "data_size": 63488 00:16:20.830 } 00:16:20.830 ] 00:16:20.830 }' 00:16:20.830 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.830 23:02:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.397 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:21.397 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:21.655 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:21.655 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:21.655 23:02:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:21.914 [2024-07-13 23:02:11.229174] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' da580d50-5c69-40ea-9cf4-d8be131c225b '!=' da580d50-5c69-40ea-9cf4-d8be131c225b ']' 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 134425 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 134425 ']' 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 134425 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134425 00:16:21.914 killing process with pid 134425 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134425' 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 134425 00:16:21.914 [2024-07-13 23:02:11.268462] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.914 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 134425 00:16:21.914 [2024-07-13 23:02:11.268543] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.914 [2024-07-13 23:02:11.268596] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.914 [2024-07-13 23:02:11.268650] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:16:21.914 [2024-07-13 23:02:11.288710] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.173 ************************************ 00:16:22.173 END TEST raid_superblock_test 00:16:22.173 ************************************ 00:16:22.173 23:02:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:22.173 00:16:22.173 real 0m15.867s 00:16:22.173 user 0m29.996s 00:16:22.173 sys 0m1.905s 00:16:22.173 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.173 23:02:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.173 23:02:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:22.173 
23:02:11 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:22.173 23:02:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:22.173 23:02:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.173 23:02:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.441 ************************************ 00:16:22.441 START TEST raid_read_error_test 00:16:22.441 ************************************ 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:22.441 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KwppF3p73g 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134960 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134960 /var/tmp/spdk-raid.sock 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 134960 ']' 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:22.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.442 23:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.442 [2024-07-13 23:02:11.657552] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:22.442 [2024-07-13 23:02:11.658014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134960 ] 00:16:22.442 [2024-07-13 23:02:11.807063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.715 [2024-07-13 23:02:11.888801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.715 [2024-07-13 23:02:11.966024] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.282 23:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.282 23:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:23.282 23:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:23.282 23:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:23.540 BaseBdev1_malloc 00:16:23.540 23:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:23.799 true 00:16:23.799 23:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:24.057 [2024-07-13 23:02:13.352624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:24.057 [2024-07-13 23:02:13.353050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.057 [2024-07-13 23:02:13.353208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:16:24.057 [2024-07-13 23:02:13.353387] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.057 [2024-07-13 23:02:13.356138] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.057 [2024-07-13 23:02:13.356334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.057 BaseBdev1 00:16:24.057 23:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:24.057 23:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:24.314 BaseBdev2_malloc 00:16:24.314 23:02:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:24.572 true 00:16:24.572 23:02:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:24.831 [2024-07-13 23:02:14.030238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:24.831 [2024-07-13 23:02:14.030551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.831 [2024-07-13 23:02:14.030636] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:24.831 [2024-07-13 23:02:14.030982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.831 [2024-07-13 23:02:14.033575] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.831 [2024-07-13 23:02:14.033752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:24.831 BaseBdev2 00:16:24.831 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:25.089 [2024-07-13 23:02:14.242362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.089 [2024-07-13 23:02:14.244655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.089 [2024-07-13 23:02:14.245040] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:25.089 [2024-07-13 23:02:14.245166] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:25.089 [2024-07-13 23:02:14.245332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:25.089 [2024-07-13 23:02:14.245957] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:25.089 [2024-07-13 23:02:14.246091] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:16:25.089 [2024-07-13 23:02:14.246369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.089 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:25.089 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:25.089 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:25.089 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:25.089 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.090 23:02:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.090 "name": "raid_bdev1", 00:16:25.090 "uuid": "c86249b2-626e-4e3e-ad8e-916615c9b63c", 00:16:25.090 "strip_size_kb": 0, 00:16:25.090 "state": "online", 00:16:25.090 "raid_level": "raid1", 00:16:25.090 "superblock": true, 00:16:25.090 "num_base_bdevs": 2, 00:16:25.090 "num_base_bdevs_discovered": 2, 00:16:25.090 "num_base_bdevs_operational": 2, 00:16:25.090 "base_bdevs_list": [ 00:16:25.090 { 00:16:25.090 "name": "BaseBdev1", 00:16:25.090 "uuid": "032d675a-9fe1-58c4-b12f-4953c7c3ef25", 00:16:25.090 "is_configured": true, 00:16:25.090 "data_offset": 2048, 00:16:25.090 "data_size": 63488 00:16:25.090 }, 00:16:25.090 { 00:16:25.090 "name": "BaseBdev2", 00:16:25.090 "uuid": "b45db7e7-04ee-524f-bb4c-c44254a87273", 00:16:25.090 "is_configured": true, 00:16:25.090 "data_offset": 2048, 00:16:25.090 "data_size": 63488 00:16:25.090 } 00:16:25.090 ] 00:16:25.090 }' 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.090 23:02:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.024 23:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:26.024 23:02:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:26.024 [2024-07-13 23:02:15.199013] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.051 23:02:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.051 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.308 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.308 "name": "raid_bdev1", 00:16:27.308 "uuid": "c86249b2-626e-4e3e-ad8e-916615c9b63c", 00:16:27.308 "strip_size_kb": 0, 00:16:27.308 "state": "online", 00:16:27.308 "raid_level": "raid1", 00:16:27.308 "superblock": true, 00:16:27.308 "num_base_bdevs": 2, 00:16:27.308 "num_base_bdevs_discovered": 2, 00:16:27.308 "num_base_bdevs_operational": 2, 00:16:27.308 "base_bdevs_list": [ 00:16:27.308 { 00:16:27.308 "name": "BaseBdev1", 00:16:27.308 "uuid": "032d675a-9fe1-58c4-b12f-4953c7c3ef25", 00:16:27.308 "is_configured": true, 00:16:27.308 "data_offset": 2048, 00:16:27.308 "data_size": 63488 00:16:27.308 }, 00:16:27.308 { 00:16:27.308 "name": "BaseBdev2", 00:16:27.308 "uuid": "b45db7e7-04ee-524f-bb4c-c44254a87273", 00:16:27.308 "is_configured": true, 00:16:27.308 "data_offset": 2048, 00:16:27.308 "data_size": 63488 00:16:27.308 } 00:16:27.308 ] 00:16:27.308 }' 00:16:27.308 23:02:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.308 23:02:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.874 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:28.133 [2024-07-13 23:02:17.520624] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.133 [2024-07-13 23:02:17.520980] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.133 [2024-07-13 23:02:17.523581] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.133 [2024-07-13 23:02:17.523765] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.133 [2024-07-13 23:02:17.523959] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.133 [2024-07-13 23:02:17.524173] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:16:28.133 0 00:16:28.133 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134960 00:16:28.133 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 134960 ']' 00:16:28.133 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 134960 00:16:28.133 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134960 00:16:28.391 killing process with pid 134960 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 134960' 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 134960 00:16:28.391 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 134960 00:16:28.391 [2024-07-13 23:02:17.561957] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.391 [2024-07-13 23:02:17.576930] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.650 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.KwppF3p73g 00:16:28.650 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:28.650 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:28.650 ************************************ 00:16:28.650 END TEST raid_read_error_test 00:16:28.650 ************************************ 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:28.651 00:16:28.651 real 0m6.316s 00:16:28.651 user 0m10.077s 00:16:28.651 sys 0m0.801s 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.651 23:02:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.651 23:02:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:28.651 23:02:17 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:28.651 23:02:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:28.651 23:02:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.651 23:02:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.651 ************************************ 00:16:28.651 START TEST raid_write_error_test 00:16:28.651 ************************************ 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.cZaJpKXOFY 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135139 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135139 /var/tmp/spdk-raid.sock 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 135139 ']' 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.651 23:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.651 [2024-07-13 23:02:18.011047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:16:28.651 [2024-07-13 23:02:18.011450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135139 ] 00:16:28.910 [2024-07-13 23:02:18.142045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.910 [2024-07-13 23:02:18.216101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.910 [2024-07-13 23:02:18.286115] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.168 23:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.168 23:02:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:29.168 23:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:29.168 23:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:29.426 BaseBdev1_malloc 00:16:29.426 23:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:29.684 true 00:16:29.684 23:02:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:29.684 [2024-07-13 23:02:19.029516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:29.684 [2024-07-13 23:02:19.029776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.684 [2024-07-13 23:02:19.029871] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:16:29.684 [2024-07-13 23:02:19.030099] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.684 [2024-07-13 23:02:19.032773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.684 [2024-07-13 23:02:19.032981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.684 BaseBdev1 00:16:29.684 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:29.684 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:29.942 BaseBdev2_malloc 00:16:29.942 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:30.201 true 00:16:30.201 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:30.459 [2024-07-13 23:02:19.723115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:30.459 [2024-07-13 23:02:19.723355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.459 [2024-07-13 23:02:19.723516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:30.459 [2024-07-13 
23:02:19.723671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.459 [2024-07-13 23:02:19.726232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.459 [2024-07-13 23:02:19.726407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.459 BaseBdev2 00:16:30.459 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:30.718 [2024-07-13 23:02:19.927265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.718 [2024-07-13 23:02:19.929532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.718 [2024-07-13 23:02:19.929913] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:30.718 [2024-07-13 23:02:19.930038] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.718 [2024-07-13 23:02:19.930235] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:30.718 [2024-07-13 23:02:19.930790] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:30.718 [2024-07-13 23:02:19.930925] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:16:30.718 [2024-07-13 23:02:19.931217] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.718 23:02:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.976 23:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.976 "name": "raid_bdev1", 00:16:30.976 "uuid": "3aa2d914-b6c2-4639-8cda-ebefb2bee0d1", 00:16:30.976 "strip_size_kb": 0, 00:16:30.976 "state": "online", 00:16:30.976 "raid_level": "raid1", 00:16:30.976 "superblock": true, 00:16:30.976 "num_base_bdevs": 2, 00:16:30.976 "num_base_bdevs_discovered": 2, 00:16:30.976 "num_base_bdevs_operational": 2, 00:16:30.976 "base_bdevs_list": [ 00:16:30.976 { 00:16:30.976 "name": 
"BaseBdev1", 00:16:30.976 "uuid": "0a774bf4-bd72-5025-a78b-0b684e9bf2cb", 00:16:30.976 "is_configured": true, 00:16:30.976 "data_offset": 2048, 00:16:30.976 "data_size": 63488 00:16:30.976 }, 00:16:30.976 { 00:16:30.976 "name": "BaseBdev2", 00:16:30.976 "uuid": "1454baa5-3450-55b7-a660-56c1d711c4fa", 00:16:30.976 "is_configured": true, 00:16:30.976 "data_offset": 2048, 00:16:30.976 "data_size": 63488 00:16:30.976 } 00:16:30.976 ] 00:16:30.976 }' 00:16:30.976 23:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.976 23:02:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.543 23:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:31.543 23:02:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:31.543 [2024-07-13 23:02:20.859925] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:32.477 23:02:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:32.736 [2024-07-13 23:02:22.017555] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:32.736 [2024-07-13 23:02:22.017992] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.736 [2024-07-13 23:02:22.018369] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000022c0 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.736 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.994 
23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.994 "name": "raid_bdev1", 00:16:32.994 "uuid": "3aa2d914-b6c2-4639-8cda-ebefb2bee0d1", 00:16:32.994 "strip_size_kb": 0, 00:16:32.994 "state": "online", 00:16:32.994 "raid_level": "raid1", 00:16:32.994 "superblock": true, 00:16:32.994 "num_base_bdevs": 2, 00:16:32.994 "num_base_bdevs_discovered": 1, 00:16:32.994 "num_base_bdevs_operational": 1, 00:16:32.994 "base_bdevs_list": [ 00:16:32.994 { 00:16:32.994 "name": null, 00:16:32.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.994 "is_configured": false, 00:16:32.994 "data_offset": 2048, 00:16:32.994 "data_size": 63488 00:16:32.994 }, 00:16:32.994 { 00:16:32.994 "name": "BaseBdev2", 00:16:32.994 "uuid": "1454baa5-3450-55b7-a660-56c1d711c4fa", 00:16:32.995 "is_configured": true, 00:16:32.995 "data_offset": 2048, 00:16:32.995 "data_size": 63488 00:16:32.995 } 00:16:32.995 ] 00:16:32.995 }' 00:16:32.995 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.995 23:02:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.562 23:02:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:33.821 [2024-07-13 23:02:23.135574] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.821 [2024-07-13 23:02:23.135635] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.821 [2024-07-13 23:02:23.138411] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.821 [2024-07-13 23:02:23.138473] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.821 [2024-07-13 23:02:23.138530] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.821 [2024-07-13 23:02:23.138543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:16:33.821 0 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135139 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 135139 ']' 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 135139 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135139 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:33.821 killing process with pid 135139 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135139' 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 135139 00:16:33.821 [2024-07-13 23:02:23.173557] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.821 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 135139 00:16:33.821 [2024-07-13 
23:02:23.188710] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.cZaJpKXOFY 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:34.388 00:16:34.388 real 0m5.551s 00:16:34.388 user 0m8.977s 00:16:34.388 sys 0m0.735s 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.388 23:02:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 ************************************ 00:16:34.388 END TEST raid_write_error_test 00:16:34.388 ************************************ 00:16:34.388 23:02:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:34.388 23:02:23 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:34.388 23:02:23 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:34.388 23:02:23 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:34.388 23:02:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:34.388 23:02:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.388 23:02:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 ************************************ 00:16:34.388 START TEST raid_state_function_test 00:16:34.388 ************************************ 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.388 23:02:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=135313 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:34.388 Process raid pid: 135313 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135313' 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 135313 /var/tmp/spdk-raid.sock 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 135313 ']' 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.388 23:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 [2024-07-13 23:02:23.622784] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
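The write-error test has passed, and the suite moves on to raid_state_function_test raid0 3 false, which needs no data path at all: it drives a bare bdev_svc app (started above on the same /var/tmp/spdk-raid.sock socket) purely through RPC and checks the raid bdev's state machine. A condensed, hedged sketch of the transitions this part of the log walks through — the commands are taken from the trace, while the ordering is simplified and the numbering and comments are editorial:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# 1. Create the raid0 array before any base bdev exists: it registers but stays
#    in "state": "configuring" with num_base_bdevs_discovered = 0.
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# 2. Deleting and re-creating the array while "configuring" must work cleanly.
$RPC bdev_raid_delete Existed_Raid
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# 3. Supply the base bdevs one at a time; each new malloc disk is claimed by the
#    waiting array and num_base_bdevs_discovered climbs 1 -> 2 -> 3.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_malloc_create 32 512 -b BaseBdev3
# 4. With all three present the array assembles to "state": "online"; deleting a
#    base bdev from raid0 (no redundancy, so has_redundancy returns 1) must drop
#    it back to "offline".
$RPC bdev_malloc_delete BaseBdev1

The repeated jq dumps in the trace are the verify_raid_bdev_state helper selecting Existed_Raid from bdev_raid_get_bdevs and comparing its state, raid_level, strip_size_kb, and discovered/operational base-bdev counts after each of these transitions.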
00:16:34.388 [2024-07-13 23:02:23.623072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.388 [2024-07-13 23:02:23.764881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.647 [2024-07-13 23:02:23.843947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.647 [2024-07-13 23:02:23.914155] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.214 23:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.214 23:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:35.214 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:35.473 [2024-07-13 23:02:24.871633] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.473 [2024-07-13 23:02:24.871753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.473 [2024-07-13 23:02:24.871778] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.473 [2024-07-13 23:02:24.871830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.473 [2024-07-13 23:02:24.871838] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.473 [2024-07-13 23:02:24.871882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.732 23:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.732 23:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.732 "name": "Existed_Raid", 00:16:35.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.732 
"strip_size_kb": 64, 00:16:35.732 "state": "configuring", 00:16:35.732 "raid_level": "raid0", 00:16:35.732 "superblock": false, 00:16:35.732 "num_base_bdevs": 3, 00:16:35.732 "num_base_bdevs_discovered": 0, 00:16:35.732 "num_base_bdevs_operational": 3, 00:16:35.732 "base_bdevs_list": [ 00:16:35.732 { 00:16:35.732 "name": "BaseBdev1", 00:16:35.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.732 "is_configured": false, 00:16:35.732 "data_offset": 0, 00:16:35.732 "data_size": 0 00:16:35.732 }, 00:16:35.732 { 00:16:35.732 "name": "BaseBdev2", 00:16:35.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.732 "is_configured": false, 00:16:35.732 "data_offset": 0, 00:16:35.732 "data_size": 0 00:16:35.732 }, 00:16:35.732 { 00:16:35.732 "name": "BaseBdev3", 00:16:35.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.732 "is_configured": false, 00:16:35.732 "data_offset": 0, 00:16:35.732 "data_size": 0 00:16:35.732 } 00:16:35.732 ] 00:16:35.732 }' 00:16:35.732 23:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.732 23:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.668 23:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:36.668 [2024-07-13 23:02:26.027674] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.668 [2024-07-13 23:02:26.027717] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:36.668 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:36.927 [2024-07-13 23:02:26.243701] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.927 [2024-07-13 23:02:26.243764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.927 [2024-07-13 23:02:26.243775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.927 [2024-07-13 23:02:26.243793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.927 [2024-07-13 23:02:26.243800] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.927 [2024-07-13 23:02:26.243853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.927 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:37.185 [2024-07-13 23:02:26.522039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.185 BaseBdev1 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:37.185 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.444 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:37.703 [ 00:16:37.703 { 00:16:37.703 "name": "BaseBdev1", 00:16:37.703 "aliases": [ 00:16:37.703 "32012ca5-f2c3-4053-94ca-7336bb196d75" 00:16:37.703 ], 00:16:37.703 "product_name": "Malloc disk", 00:16:37.703 "block_size": 512, 00:16:37.703 "num_blocks": 65536, 00:16:37.703 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:37.703 "assigned_rate_limits": { 00:16:37.703 "rw_ios_per_sec": 0, 00:16:37.703 "rw_mbytes_per_sec": 0, 00:16:37.703 "r_mbytes_per_sec": 0, 00:16:37.703 "w_mbytes_per_sec": 0 00:16:37.703 }, 00:16:37.703 "claimed": true, 00:16:37.703 "claim_type": "exclusive_write", 00:16:37.703 "zoned": false, 00:16:37.703 "supported_io_types": { 00:16:37.703 "read": true, 00:16:37.703 "write": true, 00:16:37.703 "unmap": true, 00:16:37.703 "flush": true, 00:16:37.703 "reset": true, 00:16:37.703 "nvme_admin": false, 00:16:37.703 "nvme_io": false, 00:16:37.703 "nvme_io_md": false, 00:16:37.703 "write_zeroes": true, 00:16:37.703 "zcopy": true, 00:16:37.703 "get_zone_info": false, 00:16:37.703 "zone_management": false, 00:16:37.703 "zone_append": false, 00:16:37.703 "compare": false, 00:16:37.703 "compare_and_write": false, 00:16:37.703 "abort": true, 00:16:37.703 "seek_hole": false, 00:16:37.703 "seek_data": false, 00:16:37.703 "copy": true, 00:16:37.703 "nvme_iov_md": false 00:16:37.703 }, 00:16:37.703 "memory_domains": [ 00:16:37.703 { 00:16:37.703 "dma_device_id": "system", 00:16:37.703 "dma_device_type": 1 00:16:37.703 }, 00:16:37.703 { 00:16:37.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.703 "dma_device_type": 2 00:16:37.703 } 00:16:37.703 ], 00:16:37.703 "driver_specific": {} 00:16:37.703 } 00:16:37.703 ] 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.703 23:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.962 23:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.962 "name": "Existed_Raid", 00:16:37.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.962 "strip_size_kb": 64, 00:16:37.962 "state": "configuring", 00:16:37.962 "raid_level": "raid0", 00:16:37.962 "superblock": false, 00:16:37.962 "num_base_bdevs": 3, 00:16:37.962 "num_base_bdevs_discovered": 1, 00:16:37.962 "num_base_bdevs_operational": 3, 00:16:37.962 "base_bdevs_list": [ 00:16:37.962 { 00:16:37.962 "name": "BaseBdev1", 00:16:37.962 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:37.962 "is_configured": true, 00:16:37.962 "data_offset": 0, 00:16:37.962 "data_size": 65536 00:16:37.962 }, 00:16:37.962 { 00:16:37.962 "name": "BaseBdev2", 00:16:37.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.962 "is_configured": false, 00:16:37.962 "data_offset": 0, 00:16:37.962 "data_size": 0 00:16:37.962 }, 00:16:37.962 { 00:16:37.962 "name": "BaseBdev3", 00:16:37.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.962 "is_configured": false, 00:16:37.962 "data_offset": 0, 00:16:37.962 "data_size": 0 00:16:37.962 } 00:16:37.962 ] 00:16:37.962 }' 00:16:37.962 23:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.962 23:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.531 23:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:38.789 [2024-07-13 23:02:28.114417] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.789 [2024-07-13 23:02:28.114505] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:38.789 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:39.048 [2024-07-13 23:02:28.330473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.048 [2024-07-13 23:02:28.332736] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.048 [2024-07-13 23:02:28.332802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.048 [2024-07-13 23:02:28.332814] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.048 [2024-07-13 23:02:28.332841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.048 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.307 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.307 "name": "Existed_Raid", 00:16:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.307 "strip_size_kb": 64, 00:16:39.307 "state": "configuring", 00:16:39.307 "raid_level": "raid0", 00:16:39.307 "superblock": false, 00:16:39.307 "num_base_bdevs": 3, 00:16:39.307 "num_base_bdevs_discovered": 1, 00:16:39.307 "num_base_bdevs_operational": 3, 00:16:39.307 "base_bdevs_list": [ 00:16:39.307 { 00:16:39.307 "name": "BaseBdev1", 00:16:39.307 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:39.307 "is_configured": true, 00:16:39.307 "data_offset": 0, 00:16:39.307 "data_size": 65536 00:16:39.307 }, 00:16:39.307 { 00:16:39.307 "name": "BaseBdev2", 00:16:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.307 "is_configured": false, 00:16:39.307 "data_offset": 0, 00:16:39.307 "data_size": 0 00:16:39.307 }, 00:16:39.307 { 00:16:39.307 "name": "BaseBdev3", 00:16:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.307 "is_configured": false, 00:16:39.307 "data_offset": 0, 00:16:39.307 "data_size": 0 00:16:39.307 } 00:16:39.307 ] 00:16:39.307 }' 00:16:39.307 23:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.307 23:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.873 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.132 [2024-07-13 23:02:29.409565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.132 BaseBdev2 00:16:40.132 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:40.132 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:40.132 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:40.132 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:40.132 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:40.132 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:40.132 
23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.391 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.650 [ 00:16:40.650 { 00:16:40.650 "name": "BaseBdev2", 00:16:40.650 "aliases": [ 00:16:40.650 "34453bba-e00e-4a27-b07b-da6f395e58ad" 00:16:40.650 ], 00:16:40.650 "product_name": "Malloc disk", 00:16:40.650 "block_size": 512, 00:16:40.650 "num_blocks": 65536, 00:16:40.650 "uuid": "34453bba-e00e-4a27-b07b-da6f395e58ad", 00:16:40.650 "assigned_rate_limits": { 00:16:40.650 "rw_ios_per_sec": 0, 00:16:40.650 "rw_mbytes_per_sec": 0, 00:16:40.650 "r_mbytes_per_sec": 0, 00:16:40.650 "w_mbytes_per_sec": 0 00:16:40.650 }, 00:16:40.650 "claimed": true, 00:16:40.650 "claim_type": "exclusive_write", 00:16:40.650 "zoned": false, 00:16:40.650 "supported_io_types": { 00:16:40.650 "read": true, 00:16:40.650 "write": true, 00:16:40.650 "unmap": true, 00:16:40.650 "flush": true, 00:16:40.650 "reset": true, 00:16:40.650 "nvme_admin": false, 00:16:40.650 "nvme_io": false, 00:16:40.650 "nvme_io_md": false, 00:16:40.650 "write_zeroes": true, 00:16:40.650 "zcopy": true, 00:16:40.650 "get_zone_info": false, 00:16:40.650 "zone_management": false, 00:16:40.650 "zone_append": false, 00:16:40.650 "compare": false, 00:16:40.650 "compare_and_write": false, 00:16:40.650 "abort": true, 00:16:40.650 "seek_hole": false, 00:16:40.650 "seek_data": false, 00:16:40.650 "copy": true, 00:16:40.650 "nvme_iov_md": false 00:16:40.650 }, 00:16:40.650 "memory_domains": [ 00:16:40.650 { 00:16:40.650 "dma_device_id": "system", 00:16:40.650 "dma_device_type": 1 00:16:40.650 }, 00:16:40.650 { 00:16:40.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.650 "dma_device_type": 2 00:16:40.650 } 00:16:40.650 ], 00:16:40.650 "driver_specific": {} 00:16:40.650 } 00:16:40.650 ] 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.650 23:02:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.650 23:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.909 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.909 "name": "Existed_Raid", 00:16:40.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.909 "strip_size_kb": 64, 00:16:40.909 "state": "configuring", 00:16:40.909 "raid_level": "raid0", 00:16:40.909 "superblock": false, 00:16:40.909 "num_base_bdevs": 3, 00:16:40.909 "num_base_bdevs_discovered": 2, 00:16:40.909 "num_base_bdevs_operational": 3, 00:16:40.909 "base_bdevs_list": [ 00:16:40.909 { 00:16:40.909 "name": "BaseBdev1", 00:16:40.909 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:40.909 "is_configured": true, 00:16:40.909 "data_offset": 0, 00:16:40.909 "data_size": 65536 00:16:40.909 }, 00:16:40.909 { 00:16:40.909 "name": "BaseBdev2", 00:16:40.909 "uuid": "34453bba-e00e-4a27-b07b-da6f395e58ad", 00:16:40.909 "is_configured": true, 00:16:40.909 "data_offset": 0, 00:16:40.909 "data_size": 65536 00:16:40.909 }, 00:16:40.909 { 00:16:40.909 "name": "BaseBdev3", 00:16:40.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.909 "is_configured": false, 00:16:40.909 "data_offset": 0, 00:16:40.909 "data_size": 0 00:16:40.909 } 00:16:40.909 ] 00:16:40.909 }' 00:16:40.909 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.909 23:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.475 23:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:41.733 [2024-07-13 23:02:31.053446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.733 [2024-07-13 23:02:31.053771] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:41.733 [2024-07-13 23:02:31.053818] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:41.733 [2024-07-13 23:02:31.054090] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:41.733 [2024-07-13 23:02:31.054619] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:41.733 [2024-07-13 23:02:31.054760] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:41.733 [2024-07-13 23:02:31.055140] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.733 BaseBdev3 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.733 23:02:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.990 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:42.248 [ 00:16:42.248 { 00:16:42.248 "name": "BaseBdev3", 00:16:42.248 "aliases": [ 00:16:42.248 "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3" 00:16:42.248 ], 00:16:42.248 "product_name": "Malloc disk", 00:16:42.248 "block_size": 512, 00:16:42.248 "num_blocks": 65536, 00:16:42.248 "uuid": "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3", 00:16:42.248 "assigned_rate_limits": { 00:16:42.248 "rw_ios_per_sec": 0, 00:16:42.248 "rw_mbytes_per_sec": 0, 00:16:42.248 "r_mbytes_per_sec": 0, 00:16:42.248 "w_mbytes_per_sec": 0 00:16:42.248 }, 00:16:42.248 "claimed": true, 00:16:42.248 "claim_type": "exclusive_write", 00:16:42.248 "zoned": false, 00:16:42.248 "supported_io_types": { 00:16:42.248 "read": true, 00:16:42.248 "write": true, 00:16:42.248 "unmap": true, 00:16:42.248 "flush": true, 00:16:42.248 "reset": true, 00:16:42.248 "nvme_admin": false, 00:16:42.248 "nvme_io": false, 00:16:42.248 "nvme_io_md": false, 00:16:42.248 "write_zeroes": true, 00:16:42.248 "zcopy": true, 00:16:42.248 "get_zone_info": false, 00:16:42.248 "zone_management": false, 00:16:42.248 "zone_append": false, 00:16:42.248 "compare": false, 00:16:42.248 "compare_and_write": false, 00:16:42.248 "abort": true, 00:16:42.248 "seek_hole": false, 00:16:42.248 "seek_data": false, 00:16:42.248 "copy": true, 00:16:42.248 "nvme_iov_md": false 00:16:42.248 }, 00:16:42.248 "memory_domains": [ 00:16:42.248 { 00:16:42.248 "dma_device_id": "system", 00:16:42.248 "dma_device_type": 1 00:16:42.248 }, 00:16:42.248 { 00:16:42.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.248 "dma_device_type": 2 00:16:42.248 } 00:16:42.248 ], 00:16:42.248 "driver_specific": {} 00:16:42.248 } 00:16:42.248 ] 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.248 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.506 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.506 "name": "Existed_Raid", 00:16:42.506 "uuid": "7fd6dcad-438a-4e96-99eb-49e213efeeac", 00:16:42.506 "strip_size_kb": 64, 00:16:42.506 "state": "online", 00:16:42.506 "raid_level": "raid0", 00:16:42.506 "superblock": false, 00:16:42.506 "num_base_bdevs": 3, 00:16:42.506 "num_base_bdevs_discovered": 3, 00:16:42.506 "num_base_bdevs_operational": 3, 00:16:42.506 "base_bdevs_list": [ 00:16:42.506 { 00:16:42.506 "name": "BaseBdev1", 00:16:42.506 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:42.506 "is_configured": true, 00:16:42.506 "data_offset": 0, 00:16:42.506 "data_size": 65536 00:16:42.506 }, 00:16:42.506 { 00:16:42.506 "name": "BaseBdev2", 00:16:42.506 "uuid": "34453bba-e00e-4a27-b07b-da6f395e58ad", 00:16:42.506 "is_configured": true, 00:16:42.506 "data_offset": 0, 00:16:42.506 "data_size": 65536 00:16:42.506 }, 00:16:42.506 { 00:16:42.506 "name": "BaseBdev3", 00:16:42.506 "uuid": "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3", 00:16:42.507 "is_configured": true, 00:16:42.507 "data_offset": 0, 00:16:42.507 "data_size": 65536 00:16:42.507 } 00:16:42.507 ] 00:16:42.507 }' 00:16:42.507 23:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.507 23:02:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:43.442 [2024-07-13 23:02:32.746067] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:43.442 "name": "Existed_Raid", 00:16:43.442 "aliases": [ 00:16:43.442 "7fd6dcad-438a-4e96-99eb-49e213efeeac" 00:16:43.442 ], 00:16:43.442 "product_name": "Raid Volume", 00:16:43.442 "block_size": 512, 00:16:43.442 "num_blocks": 196608, 00:16:43.442 "uuid": "7fd6dcad-438a-4e96-99eb-49e213efeeac", 00:16:43.442 "assigned_rate_limits": { 00:16:43.442 "rw_ios_per_sec": 0, 00:16:43.442 "rw_mbytes_per_sec": 0, 00:16:43.442 "r_mbytes_per_sec": 0, 00:16:43.442 "w_mbytes_per_sec": 0 00:16:43.442 }, 00:16:43.442 "claimed": false, 00:16:43.442 "zoned": false, 00:16:43.442 "supported_io_types": { 00:16:43.442 "read": true, 00:16:43.442 "write": true, 00:16:43.442 "unmap": true, 00:16:43.442 "flush": true, 00:16:43.442 "reset": true, 
00:16:43.442 "nvme_admin": false, 00:16:43.442 "nvme_io": false, 00:16:43.442 "nvme_io_md": false, 00:16:43.442 "write_zeroes": true, 00:16:43.442 "zcopy": false, 00:16:43.442 "get_zone_info": false, 00:16:43.442 "zone_management": false, 00:16:43.442 "zone_append": false, 00:16:43.442 "compare": false, 00:16:43.442 "compare_and_write": false, 00:16:43.442 "abort": false, 00:16:43.442 "seek_hole": false, 00:16:43.442 "seek_data": false, 00:16:43.442 "copy": false, 00:16:43.442 "nvme_iov_md": false 00:16:43.442 }, 00:16:43.442 "memory_domains": [ 00:16:43.442 { 00:16:43.442 "dma_device_id": "system", 00:16:43.442 "dma_device_type": 1 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.442 "dma_device_type": 2 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "dma_device_id": "system", 00:16:43.442 "dma_device_type": 1 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.442 "dma_device_type": 2 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "dma_device_id": "system", 00:16:43.442 "dma_device_type": 1 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.442 "dma_device_type": 2 00:16:43.442 } 00:16:43.442 ], 00:16:43.442 "driver_specific": { 00:16:43.442 "raid": { 00:16:43.442 "uuid": "7fd6dcad-438a-4e96-99eb-49e213efeeac", 00:16:43.442 "strip_size_kb": 64, 00:16:43.442 "state": "online", 00:16:43.442 "raid_level": "raid0", 00:16:43.442 "superblock": false, 00:16:43.442 "num_base_bdevs": 3, 00:16:43.442 "num_base_bdevs_discovered": 3, 00:16:43.442 "num_base_bdevs_operational": 3, 00:16:43.442 "base_bdevs_list": [ 00:16:43.442 { 00:16:43.442 "name": "BaseBdev1", 00:16:43.442 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:43.442 "is_configured": true, 00:16:43.442 "data_offset": 0, 00:16:43.442 "data_size": 65536 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "name": "BaseBdev2", 00:16:43.442 "uuid": "34453bba-e00e-4a27-b07b-da6f395e58ad", 00:16:43.442 "is_configured": true, 00:16:43.442 "data_offset": 0, 00:16:43.442 "data_size": 65536 00:16:43.442 }, 00:16:43.442 { 00:16:43.442 "name": "BaseBdev3", 00:16:43.442 "uuid": "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3", 00:16:43.442 "is_configured": true, 00:16:43.442 "data_offset": 0, 00:16:43.442 "data_size": 65536 00:16:43.442 } 00:16:43.442 ] 00:16:43.442 } 00:16:43.442 } 00:16:43.442 }' 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:43.442 BaseBdev2 00:16:43.442 BaseBdev3' 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:43.442 23:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.701 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.701 "name": "BaseBdev1", 00:16:43.701 "aliases": [ 00:16:43.701 "32012ca5-f2c3-4053-94ca-7336bb196d75" 00:16:43.701 ], 00:16:43.701 "product_name": "Malloc disk", 00:16:43.701 "block_size": 512, 00:16:43.701 "num_blocks": 65536, 00:16:43.701 "uuid": "32012ca5-f2c3-4053-94ca-7336bb196d75", 00:16:43.701 
"assigned_rate_limits": { 00:16:43.701 "rw_ios_per_sec": 0, 00:16:43.701 "rw_mbytes_per_sec": 0, 00:16:43.701 "r_mbytes_per_sec": 0, 00:16:43.701 "w_mbytes_per_sec": 0 00:16:43.701 }, 00:16:43.701 "claimed": true, 00:16:43.701 "claim_type": "exclusive_write", 00:16:43.701 "zoned": false, 00:16:43.701 "supported_io_types": { 00:16:43.701 "read": true, 00:16:43.701 "write": true, 00:16:43.701 "unmap": true, 00:16:43.701 "flush": true, 00:16:43.701 "reset": true, 00:16:43.701 "nvme_admin": false, 00:16:43.701 "nvme_io": false, 00:16:43.701 "nvme_io_md": false, 00:16:43.701 "write_zeroes": true, 00:16:43.701 "zcopy": true, 00:16:43.701 "get_zone_info": false, 00:16:43.701 "zone_management": false, 00:16:43.701 "zone_append": false, 00:16:43.701 "compare": false, 00:16:43.701 "compare_and_write": false, 00:16:43.701 "abort": true, 00:16:43.701 "seek_hole": false, 00:16:43.701 "seek_data": false, 00:16:43.701 "copy": true, 00:16:43.701 "nvme_iov_md": false 00:16:43.701 }, 00:16:43.701 "memory_domains": [ 00:16:43.701 { 00:16:43.701 "dma_device_id": "system", 00:16:43.701 "dma_device_type": 1 00:16:43.701 }, 00:16:43.701 { 00:16:43.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.701 "dma_device_type": 2 00:16:43.701 } 00:16:43.701 ], 00:16:43.701 "driver_specific": {} 00:16:43.701 }' 00:16:43.701 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.958 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:44.215 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.473 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.473 "name": "BaseBdev2", 00:16:44.473 "aliases": [ 00:16:44.473 "34453bba-e00e-4a27-b07b-da6f395e58ad" 00:16:44.473 ], 00:16:44.473 "product_name": "Malloc disk", 00:16:44.473 "block_size": 512, 00:16:44.473 "num_blocks": 65536, 00:16:44.473 "uuid": "34453bba-e00e-4a27-b07b-da6f395e58ad", 00:16:44.473 "assigned_rate_limits": { 00:16:44.473 "rw_ios_per_sec": 0, 00:16:44.473 "rw_mbytes_per_sec": 0, 00:16:44.473 "r_mbytes_per_sec": 0, 00:16:44.473 "w_mbytes_per_sec": 0 00:16:44.473 }, 00:16:44.473 
"claimed": true, 00:16:44.473 "claim_type": "exclusive_write", 00:16:44.473 "zoned": false, 00:16:44.473 "supported_io_types": { 00:16:44.473 "read": true, 00:16:44.473 "write": true, 00:16:44.473 "unmap": true, 00:16:44.473 "flush": true, 00:16:44.473 "reset": true, 00:16:44.473 "nvme_admin": false, 00:16:44.473 "nvme_io": false, 00:16:44.473 "nvme_io_md": false, 00:16:44.473 "write_zeroes": true, 00:16:44.473 "zcopy": true, 00:16:44.473 "get_zone_info": false, 00:16:44.473 "zone_management": false, 00:16:44.473 "zone_append": false, 00:16:44.473 "compare": false, 00:16:44.473 "compare_and_write": false, 00:16:44.473 "abort": true, 00:16:44.473 "seek_hole": false, 00:16:44.473 "seek_data": false, 00:16:44.473 "copy": true, 00:16:44.473 "nvme_iov_md": false 00:16:44.473 }, 00:16:44.473 "memory_domains": [ 00:16:44.473 { 00:16:44.473 "dma_device_id": "system", 00:16:44.473 "dma_device_type": 1 00:16:44.473 }, 00:16:44.473 { 00:16:44.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.473 "dma_device_type": 2 00:16:44.473 } 00:16:44.473 ], 00:16:44.473 "driver_specific": {} 00:16:44.473 }' 00:16:44.473 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.473 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.473 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:44.473 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.731 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.731 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:44.731 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.731 23:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:44.731 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.990 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.990 "name": "BaseBdev3", 00:16:44.990 "aliases": [ 00:16:44.990 "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3" 00:16:44.990 ], 00:16:44.990 "product_name": "Malloc disk", 00:16:44.990 "block_size": 512, 00:16:44.990 "num_blocks": 65536, 00:16:44.990 "uuid": "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3", 00:16:44.990 "assigned_rate_limits": { 00:16:44.990 "rw_ios_per_sec": 0, 00:16:44.990 "rw_mbytes_per_sec": 0, 00:16:44.990 "r_mbytes_per_sec": 0, 00:16:44.990 "w_mbytes_per_sec": 0 00:16:44.990 }, 00:16:44.990 "claimed": true, 00:16:44.990 "claim_type": "exclusive_write", 00:16:44.990 "zoned": false, 00:16:44.990 "supported_io_types": { 00:16:44.990 "read": true, 00:16:44.990 "write": true, 00:16:44.990 
"unmap": true, 00:16:44.990 "flush": true, 00:16:44.990 "reset": true, 00:16:44.990 "nvme_admin": false, 00:16:44.990 "nvme_io": false, 00:16:44.990 "nvme_io_md": false, 00:16:44.990 "write_zeroes": true, 00:16:44.990 "zcopy": true, 00:16:44.990 "get_zone_info": false, 00:16:44.990 "zone_management": false, 00:16:44.990 "zone_append": false, 00:16:44.990 "compare": false, 00:16:44.990 "compare_and_write": false, 00:16:44.990 "abort": true, 00:16:44.990 "seek_hole": false, 00:16:44.990 "seek_data": false, 00:16:44.990 "copy": true, 00:16:44.990 "nvme_iov_md": false 00:16:44.990 }, 00:16:44.990 "memory_domains": [ 00:16:44.990 { 00:16:44.990 "dma_device_id": "system", 00:16:44.990 "dma_device_type": 1 00:16:44.990 }, 00:16:44.990 { 00:16:44.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.990 "dma_device_type": 2 00:16:44.990 } 00:16:44.990 ], 00:16:44.990 "driver_specific": {} 00:16:44.990 }' 00:16:44.990 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.248 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.506 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.506 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.506 23:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:45.764 [2024-07-13 23:02:34.998394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.764 [2024-07-13 23:02:34.998656] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.764 [2024-07-13 23:02:34.998850] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.764 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.022 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.022 "name": "Existed_Raid", 00:16:46.022 "uuid": "7fd6dcad-438a-4e96-99eb-49e213efeeac", 00:16:46.022 "strip_size_kb": 64, 00:16:46.022 "state": "offline", 00:16:46.022 "raid_level": "raid0", 00:16:46.022 "superblock": false, 00:16:46.022 "num_base_bdevs": 3, 00:16:46.022 "num_base_bdevs_discovered": 2, 00:16:46.022 "num_base_bdevs_operational": 2, 00:16:46.022 "base_bdevs_list": [ 00:16:46.023 { 00:16:46.023 "name": null, 00:16:46.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.023 "is_configured": false, 00:16:46.023 "data_offset": 0, 00:16:46.023 "data_size": 65536 00:16:46.023 }, 00:16:46.023 { 00:16:46.023 "name": "BaseBdev2", 00:16:46.023 "uuid": "34453bba-e00e-4a27-b07b-da6f395e58ad", 00:16:46.023 "is_configured": true, 00:16:46.023 "data_offset": 0, 00:16:46.023 "data_size": 65536 00:16:46.023 }, 00:16:46.023 { 00:16:46.023 "name": "BaseBdev3", 00:16:46.023 "uuid": "2aac0be5-1f0a-45ea-bd4a-acf1d24e53e3", 00:16:46.023 "is_configured": true, 00:16:46.023 "data_offset": 0, 00:16:46.023 "data_size": 65536 00:16:46.023 } 00:16:46.023 ] 00:16:46.023 }' 00:16:46.023 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.023 23:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.591 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:46.591 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:46.591 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.591 23:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:46.854 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:46.854 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.854 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:47.132 [2024-07-13 23:02:36.316110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
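
(Aside, not part of the captured harness output: at this point the test has deleted BaseBdev1, and because raid0 carries no redundancy the Existed_Raid array dropped from "online" to "offline"; the loop above is now deleting the remaining base bdevs. Below is a minimal sketch of how that state check could be reproduced by hand against the same RPC socket. The rpc.py path, socket path, RPC names, and jq filter are taken verbatim from the log; the shell variable names and the final comparison are illustrative, mirroring what verify_raid_bdev_state does inside the harness.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Removing any base bdev degrades a raid0 array past recovery,
    # so after the delete the array must report state "offline".
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid").state')
    [[ "$state" == offline ]] || echo "unexpected raid state: $state"
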
00:16:47.132 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:47.132 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:47.132 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.132 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:47.399 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:47.399 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.400 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:47.400 [2024-07-13 23:02:36.802747] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.400 [2024-07-13 23:02:36.802979] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:47.658 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:47.658 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:47.658 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.658 23:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:47.916 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:47.916 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:47.916 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:47.916 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:47.916 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:47.916 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.174 BaseBdev2 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.174 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.433 [ 00:16:48.433 { 00:16:48.433 "name": "BaseBdev2", 
00:16:48.433 "aliases": [ 00:16:48.433 "39bac579-e071-4693-b111-4f847edee0b2" 00:16:48.433 ], 00:16:48.433 "product_name": "Malloc disk", 00:16:48.433 "block_size": 512, 00:16:48.433 "num_blocks": 65536, 00:16:48.433 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:48.433 "assigned_rate_limits": { 00:16:48.433 "rw_ios_per_sec": 0, 00:16:48.433 "rw_mbytes_per_sec": 0, 00:16:48.433 "r_mbytes_per_sec": 0, 00:16:48.433 "w_mbytes_per_sec": 0 00:16:48.433 }, 00:16:48.433 "claimed": false, 00:16:48.433 "zoned": false, 00:16:48.433 "supported_io_types": { 00:16:48.433 "read": true, 00:16:48.433 "write": true, 00:16:48.433 "unmap": true, 00:16:48.433 "flush": true, 00:16:48.433 "reset": true, 00:16:48.433 "nvme_admin": false, 00:16:48.433 "nvme_io": false, 00:16:48.433 "nvme_io_md": false, 00:16:48.433 "write_zeroes": true, 00:16:48.433 "zcopy": true, 00:16:48.433 "get_zone_info": false, 00:16:48.433 "zone_management": false, 00:16:48.433 "zone_append": false, 00:16:48.433 "compare": false, 00:16:48.433 "compare_and_write": false, 00:16:48.433 "abort": true, 00:16:48.433 "seek_hole": false, 00:16:48.433 "seek_data": false, 00:16:48.433 "copy": true, 00:16:48.433 "nvme_iov_md": false 00:16:48.433 }, 00:16:48.433 "memory_domains": [ 00:16:48.433 { 00:16:48.433 "dma_device_id": "system", 00:16:48.433 "dma_device_type": 1 00:16:48.433 }, 00:16:48.433 { 00:16:48.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.433 "dma_device_type": 2 00:16:48.433 } 00:16:48.433 ], 00:16:48.433 "driver_specific": {} 00:16:48.433 } 00:16:48.433 ] 00:16:48.433 23:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:48.433 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:48.433 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:48.433 23:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:48.691 BaseBdev3 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.691 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.949 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:49.207 [ 00:16:49.207 { 00:16:49.207 "name": "BaseBdev3", 00:16:49.207 "aliases": [ 00:16:49.207 "bb160a0a-76c7-4341-b468-7ae05b6cea17" 00:16:49.207 ], 00:16:49.207 "product_name": "Malloc disk", 00:16:49.207 "block_size": 512, 00:16:49.207 "num_blocks": 65536, 00:16:49.207 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:49.207 "assigned_rate_limits": { 00:16:49.207 "rw_ios_per_sec": 0, 
00:16:49.207 "rw_mbytes_per_sec": 0, 00:16:49.207 "r_mbytes_per_sec": 0, 00:16:49.207 "w_mbytes_per_sec": 0 00:16:49.207 }, 00:16:49.207 "claimed": false, 00:16:49.207 "zoned": false, 00:16:49.207 "supported_io_types": { 00:16:49.207 "read": true, 00:16:49.207 "write": true, 00:16:49.207 "unmap": true, 00:16:49.207 "flush": true, 00:16:49.207 "reset": true, 00:16:49.207 "nvme_admin": false, 00:16:49.207 "nvme_io": false, 00:16:49.207 "nvme_io_md": false, 00:16:49.207 "write_zeroes": true, 00:16:49.207 "zcopy": true, 00:16:49.207 "get_zone_info": false, 00:16:49.207 "zone_management": false, 00:16:49.207 "zone_append": false, 00:16:49.207 "compare": false, 00:16:49.207 "compare_and_write": false, 00:16:49.207 "abort": true, 00:16:49.207 "seek_hole": false, 00:16:49.207 "seek_data": false, 00:16:49.207 "copy": true, 00:16:49.207 "nvme_iov_md": false 00:16:49.207 }, 00:16:49.207 "memory_domains": [ 00:16:49.207 { 00:16:49.207 "dma_device_id": "system", 00:16:49.207 "dma_device_type": 1 00:16:49.207 }, 00:16:49.207 { 00:16:49.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.207 "dma_device_type": 2 00:16:49.207 } 00:16:49.207 ], 00:16:49.207 "driver_specific": {} 00:16:49.207 } 00:16:49.207 ] 00:16:49.207 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:49.207 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:49.207 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:49.207 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:49.465 [2024-07-13 23:02:38.723849] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.465 [2024-07-13 23:02:38.724234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.465 [2024-07-13 23:02:38.724419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.465 [2024-07-13 23:02:38.726841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.465 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:49.465 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.465 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.465 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:49.465 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:49.465 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:49.466 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.466 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.466 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.466 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.466 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.466 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.724 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.724 "name": "Existed_Raid", 00:16:49.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.724 "strip_size_kb": 64, 00:16:49.724 "state": "configuring", 00:16:49.724 "raid_level": "raid0", 00:16:49.724 "superblock": false, 00:16:49.724 "num_base_bdevs": 3, 00:16:49.724 "num_base_bdevs_discovered": 2, 00:16:49.724 "num_base_bdevs_operational": 3, 00:16:49.724 "base_bdevs_list": [ 00:16:49.724 { 00:16:49.724 "name": "BaseBdev1", 00:16:49.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.724 "is_configured": false, 00:16:49.724 "data_offset": 0, 00:16:49.724 "data_size": 0 00:16:49.724 }, 00:16:49.724 { 00:16:49.724 "name": "BaseBdev2", 00:16:49.724 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:49.724 "is_configured": true, 00:16:49.724 "data_offset": 0, 00:16:49.724 "data_size": 65536 00:16:49.724 }, 00:16:49.724 { 00:16:49.724 "name": "BaseBdev3", 00:16:49.724 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:49.724 "is_configured": true, 00:16:49.724 "data_offset": 0, 00:16:49.724 "data_size": 65536 00:16:49.724 } 00:16:49.724 ] 00:16:49.724 }' 00:16:49.724 23:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.724 23:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.291 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:50.550 [2024-07-13 23:02:39.768121] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.550 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.809 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.809 "name": "Existed_Raid", 
00:16:50.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.809 "strip_size_kb": 64, 00:16:50.809 "state": "configuring", 00:16:50.809 "raid_level": "raid0", 00:16:50.809 "superblock": false, 00:16:50.809 "num_base_bdevs": 3, 00:16:50.809 "num_base_bdevs_discovered": 1, 00:16:50.809 "num_base_bdevs_operational": 3, 00:16:50.809 "base_bdevs_list": [ 00:16:50.809 { 00:16:50.809 "name": "BaseBdev1", 00:16:50.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.809 "is_configured": false, 00:16:50.809 "data_offset": 0, 00:16:50.809 "data_size": 0 00:16:50.809 }, 00:16:50.809 { 00:16:50.809 "name": null, 00:16:50.809 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:50.809 "is_configured": false, 00:16:50.809 "data_offset": 0, 00:16:50.809 "data_size": 65536 00:16:50.809 }, 00:16:50.809 { 00:16:50.809 "name": "BaseBdev3", 00:16:50.809 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:50.809 "is_configured": true, 00:16:50.809 "data_offset": 0, 00:16:50.809 "data_size": 65536 00:16:50.809 } 00:16:50.809 ] 00:16:50.809 }' 00:16:50.809 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.809 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.376 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.376 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:51.635 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:51.635 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.893 [2024-07-13 23:02:41.185358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.893 BaseBdev1 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.893 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.152 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.436 [ 00:16:52.436 { 00:16:52.436 "name": "BaseBdev1", 00:16:52.436 "aliases": [ 00:16:52.436 "51f8c274-1435-49c5-af75-732239dead91" 00:16:52.436 ], 00:16:52.436 "product_name": "Malloc disk", 00:16:52.436 "block_size": 512, 00:16:52.436 "num_blocks": 65536, 00:16:52.436 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:16:52.436 "assigned_rate_limits": { 00:16:52.436 "rw_ios_per_sec": 0, 00:16:52.436 "rw_mbytes_per_sec": 0, 00:16:52.436 
"r_mbytes_per_sec": 0, 00:16:52.436 "w_mbytes_per_sec": 0 00:16:52.436 }, 00:16:52.436 "claimed": true, 00:16:52.436 "claim_type": "exclusive_write", 00:16:52.436 "zoned": false, 00:16:52.436 "supported_io_types": { 00:16:52.436 "read": true, 00:16:52.436 "write": true, 00:16:52.436 "unmap": true, 00:16:52.436 "flush": true, 00:16:52.436 "reset": true, 00:16:52.436 "nvme_admin": false, 00:16:52.436 "nvme_io": false, 00:16:52.436 "nvme_io_md": false, 00:16:52.436 "write_zeroes": true, 00:16:52.436 "zcopy": true, 00:16:52.436 "get_zone_info": false, 00:16:52.436 "zone_management": false, 00:16:52.436 "zone_append": false, 00:16:52.436 "compare": false, 00:16:52.436 "compare_and_write": false, 00:16:52.436 "abort": true, 00:16:52.436 "seek_hole": false, 00:16:52.436 "seek_data": false, 00:16:52.436 "copy": true, 00:16:52.436 "nvme_iov_md": false 00:16:52.436 }, 00:16:52.436 "memory_domains": [ 00:16:52.436 { 00:16:52.436 "dma_device_id": "system", 00:16:52.436 "dma_device_type": 1 00:16:52.436 }, 00:16:52.436 { 00:16:52.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.436 "dma_device_type": 2 00:16:52.436 } 00:16:52.436 ], 00:16:52.436 "driver_specific": {} 00:16:52.436 } 00:16:52.436 ] 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.436 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.694 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.694 "name": "Existed_Raid", 00:16:52.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.694 "strip_size_kb": 64, 00:16:52.694 "state": "configuring", 00:16:52.694 "raid_level": "raid0", 00:16:52.694 "superblock": false, 00:16:52.694 "num_base_bdevs": 3, 00:16:52.694 "num_base_bdevs_discovered": 2, 00:16:52.694 "num_base_bdevs_operational": 3, 00:16:52.694 "base_bdevs_list": [ 00:16:52.694 { 00:16:52.694 "name": "BaseBdev1", 00:16:52.694 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:16:52.694 "is_configured": true, 00:16:52.694 "data_offset": 0, 00:16:52.694 "data_size": 65536 00:16:52.694 }, 00:16:52.694 { 00:16:52.694 "name": 
null, 00:16:52.694 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:52.694 "is_configured": false, 00:16:52.694 "data_offset": 0, 00:16:52.694 "data_size": 65536 00:16:52.694 }, 00:16:52.694 { 00:16:52.694 "name": "BaseBdev3", 00:16:52.694 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:52.694 "is_configured": true, 00:16:52.694 "data_offset": 0, 00:16:52.694 "data_size": 65536 00:16:52.694 } 00:16:52.694 ] 00:16:52.694 }' 00:16:52.695 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.695 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.260 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.260 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:53.517 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:53.517 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:53.775 [2024-07-13 23:02:43.122155] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.775 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.032 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.032 "name": "Existed_Raid", 00:16:54.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.032 "strip_size_kb": 64, 00:16:54.032 "state": "configuring", 00:16:54.032 "raid_level": "raid0", 00:16:54.032 "superblock": false, 00:16:54.032 "num_base_bdevs": 3, 00:16:54.032 "num_base_bdevs_discovered": 1, 00:16:54.032 "num_base_bdevs_operational": 3, 00:16:54.032 "base_bdevs_list": [ 00:16:54.032 { 00:16:54.032 "name": "BaseBdev1", 00:16:54.032 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:16:54.032 "is_configured": true, 00:16:54.032 "data_offset": 0, 00:16:54.032 "data_size": 65536 
00:16:54.032 }, 00:16:54.032 { 00:16:54.032 "name": null, 00:16:54.032 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:54.032 "is_configured": false, 00:16:54.032 "data_offset": 0, 00:16:54.032 "data_size": 65536 00:16:54.032 }, 00:16:54.032 { 00:16:54.032 "name": null, 00:16:54.032 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:54.032 "is_configured": false, 00:16:54.032 "data_offset": 0, 00:16:54.032 "data_size": 65536 00:16:54.032 } 00:16:54.032 ] 00:16:54.032 }' 00:16:54.032 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.032 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.963 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.964 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:54.964 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:54.964 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:55.221 [2024-07-13 23:02:44.469378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.221 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.222 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.222 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.479 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.479 "name": "Existed_Raid", 00:16:55.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.479 "strip_size_kb": 64, 00:16:55.479 "state": "configuring", 00:16:55.479 "raid_level": "raid0", 00:16:55.479 "superblock": false, 00:16:55.479 "num_base_bdevs": 3, 00:16:55.479 "num_base_bdevs_discovered": 2, 00:16:55.479 "num_base_bdevs_operational": 3, 00:16:55.479 "base_bdevs_list": [ 00:16:55.479 { 00:16:55.479 "name": "BaseBdev1", 00:16:55.479 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:16:55.479 
"is_configured": true, 00:16:55.479 "data_offset": 0, 00:16:55.479 "data_size": 65536 00:16:55.479 }, 00:16:55.479 { 00:16:55.479 "name": null, 00:16:55.479 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:55.480 "is_configured": false, 00:16:55.480 "data_offset": 0, 00:16:55.480 "data_size": 65536 00:16:55.480 }, 00:16:55.480 { 00:16:55.480 "name": "BaseBdev3", 00:16:55.480 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:55.480 "is_configured": true, 00:16:55.480 "data_offset": 0, 00:16:55.480 "data_size": 65536 00:16:55.480 } 00:16:55.480 ] 00:16:55.480 }' 00:16:55.480 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.480 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.045 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.045 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:56.303 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:56.303 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:56.562 [2024-07-13 23:02:45.881823] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.562 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.821 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.821 "name": "Existed_Raid", 00:16:56.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.821 "strip_size_kb": 64, 00:16:56.821 "state": "configuring", 00:16:56.821 "raid_level": "raid0", 00:16:56.821 "superblock": false, 00:16:56.821 "num_base_bdevs": 3, 00:16:56.821 "num_base_bdevs_discovered": 1, 00:16:56.821 "num_base_bdevs_operational": 3, 00:16:56.821 "base_bdevs_list": [ 00:16:56.821 { 00:16:56.821 "name": null, 00:16:56.821 "uuid": 
"51f8c274-1435-49c5-af75-732239dead91", 00:16:56.821 "is_configured": false, 00:16:56.821 "data_offset": 0, 00:16:56.821 "data_size": 65536 00:16:56.821 }, 00:16:56.821 { 00:16:56.821 "name": null, 00:16:56.821 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:56.821 "is_configured": false, 00:16:56.821 "data_offset": 0, 00:16:56.821 "data_size": 65536 00:16:56.821 }, 00:16:56.821 { 00:16:56.821 "name": "BaseBdev3", 00:16:56.821 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:56.821 "is_configured": true, 00:16:56.821 "data_offset": 0, 00:16:56.821 "data_size": 65536 00:16:56.821 } 00:16:56.821 ] 00:16:56.821 }' 00:16:56.821 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.821 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.755 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.755 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:57.756 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:57.756 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:58.014 [2024-07-13 23:02:47.325330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.014 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.272 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.272 "name": "Existed_Raid", 00:16:58.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.272 "strip_size_kb": 64, 00:16:58.272 "state": "configuring", 00:16:58.272 "raid_level": "raid0", 00:16:58.272 "superblock": false, 00:16:58.272 "num_base_bdevs": 3, 00:16:58.272 "num_base_bdevs_discovered": 2, 00:16:58.272 "num_base_bdevs_operational": 3, 00:16:58.272 
"base_bdevs_list": [ 00:16:58.272 { 00:16:58.272 "name": null, 00:16:58.272 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:16:58.272 "is_configured": false, 00:16:58.272 "data_offset": 0, 00:16:58.272 "data_size": 65536 00:16:58.272 }, 00:16:58.272 { 00:16:58.272 "name": "BaseBdev2", 00:16:58.272 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:16:58.272 "is_configured": true, 00:16:58.272 "data_offset": 0, 00:16:58.272 "data_size": 65536 00:16:58.272 }, 00:16:58.272 { 00:16:58.272 "name": "BaseBdev3", 00:16:58.272 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:16:58.272 "is_configured": true, 00:16:58.272 "data_offset": 0, 00:16:58.272 "data_size": 65536 00:16:58.272 } 00:16:58.272 ] 00:16:58.272 }' 00:16:58.272 23:02:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.272 23:02:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.884 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.884 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:59.143 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:59.143 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:59.143 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.401 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 51f8c274-1435-49c5-af75-732239dead91 00:16:59.660 [2024-07-13 23:02:48.862062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:59.660 [2024-07-13 23:02:48.862336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:59.660 [2024-07-13 23:02:48.862384] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:59.660 [2024-07-13 23:02:48.862575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:59.660 [2024-07-13 23:02:48.863040] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:59.660 [2024-07-13 23:02:48.863223] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:59.660 [2024-07-13 23:02:48.863545] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.660 NewBaseBdev 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:59.660 23:02:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.919 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:00.177 [ 00:17:00.177 { 00:17:00.177 "name": "NewBaseBdev", 00:17:00.177 "aliases": [ 00:17:00.177 "51f8c274-1435-49c5-af75-732239dead91" 00:17:00.177 ], 00:17:00.177 "product_name": "Malloc disk", 00:17:00.177 "block_size": 512, 00:17:00.177 "num_blocks": 65536, 00:17:00.177 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:17:00.177 "assigned_rate_limits": { 00:17:00.177 "rw_ios_per_sec": 0, 00:17:00.177 "rw_mbytes_per_sec": 0, 00:17:00.177 "r_mbytes_per_sec": 0, 00:17:00.177 "w_mbytes_per_sec": 0 00:17:00.177 }, 00:17:00.177 "claimed": true, 00:17:00.177 "claim_type": "exclusive_write", 00:17:00.177 "zoned": false, 00:17:00.177 "supported_io_types": { 00:17:00.177 "read": true, 00:17:00.177 "write": true, 00:17:00.177 "unmap": true, 00:17:00.177 "flush": true, 00:17:00.177 "reset": true, 00:17:00.177 "nvme_admin": false, 00:17:00.177 "nvme_io": false, 00:17:00.177 "nvme_io_md": false, 00:17:00.177 "write_zeroes": true, 00:17:00.177 "zcopy": true, 00:17:00.178 "get_zone_info": false, 00:17:00.178 "zone_management": false, 00:17:00.178 "zone_append": false, 00:17:00.178 "compare": false, 00:17:00.178 "compare_and_write": false, 00:17:00.178 "abort": true, 00:17:00.178 "seek_hole": false, 00:17:00.178 "seek_data": false, 00:17:00.178 "copy": true, 00:17:00.178 "nvme_iov_md": false 00:17:00.178 }, 00:17:00.178 "memory_domains": [ 00:17:00.178 { 00:17:00.178 "dma_device_id": "system", 00:17:00.178 "dma_device_type": 1 00:17:00.178 }, 00:17:00.178 { 00:17:00.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.178 "dma_device_type": 2 00:17:00.178 } 00:17:00.178 ], 00:17:00.178 "driver_specific": {} 00:17:00.178 } 00:17:00.178 ] 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.178 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:00.436 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.436 "name": "Existed_Raid", 00:17:00.436 "uuid": "45ca30cb-e735-4b93-9ce4-923ada6373ca", 00:17:00.436 "strip_size_kb": 64, 00:17:00.436 "state": "online", 00:17:00.436 "raid_level": "raid0", 00:17:00.436 "superblock": false, 00:17:00.436 "num_base_bdevs": 3, 00:17:00.436 "num_base_bdevs_discovered": 3, 00:17:00.436 "num_base_bdevs_operational": 3, 00:17:00.436 "base_bdevs_list": [ 00:17:00.436 { 00:17:00.436 "name": "NewBaseBdev", 00:17:00.436 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:17:00.436 "is_configured": true, 00:17:00.436 "data_offset": 0, 00:17:00.436 "data_size": 65536 00:17:00.436 }, 00:17:00.436 { 00:17:00.436 "name": "BaseBdev2", 00:17:00.436 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:17:00.436 "is_configured": true, 00:17:00.436 "data_offset": 0, 00:17:00.436 "data_size": 65536 00:17:00.436 }, 00:17:00.436 { 00:17:00.436 "name": "BaseBdev3", 00:17:00.436 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:17:00.436 "is_configured": true, 00:17:00.436 "data_offset": 0, 00:17:00.436 "data_size": 65536 00:17:00.436 } 00:17:00.436 ] 00:17:00.436 }' 00:17:00.436 23:02:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.436 23:02:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.002 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.002 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:01.002 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:01.002 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:01.003 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:01.003 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:01.003 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:01.003 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:01.261 [2024-07-13 23:02:50.542845] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.261 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:01.261 "name": "Existed_Raid", 00:17:01.261 "aliases": [ 00:17:01.261 "45ca30cb-e735-4b93-9ce4-923ada6373ca" 00:17:01.261 ], 00:17:01.261 "product_name": "Raid Volume", 00:17:01.261 "block_size": 512, 00:17:01.261 "num_blocks": 196608, 00:17:01.261 "uuid": "45ca30cb-e735-4b93-9ce4-923ada6373ca", 00:17:01.261 "assigned_rate_limits": { 00:17:01.261 "rw_ios_per_sec": 0, 00:17:01.261 "rw_mbytes_per_sec": 0, 00:17:01.261 "r_mbytes_per_sec": 0, 00:17:01.261 "w_mbytes_per_sec": 0 00:17:01.261 }, 00:17:01.261 "claimed": false, 00:17:01.261 "zoned": false, 00:17:01.261 "supported_io_types": { 00:17:01.261 "read": true, 00:17:01.261 "write": true, 00:17:01.261 "unmap": true, 00:17:01.261 "flush": true, 00:17:01.261 "reset": true, 00:17:01.261 "nvme_admin": false, 00:17:01.261 "nvme_io": false, 00:17:01.261 "nvme_io_md": false, 00:17:01.261 "write_zeroes": true, 00:17:01.261 "zcopy": false, 00:17:01.261 "get_zone_info": false, 
00:17:01.261 "zone_management": false, 00:17:01.261 "zone_append": false, 00:17:01.261 "compare": false, 00:17:01.261 "compare_and_write": false, 00:17:01.261 "abort": false, 00:17:01.261 "seek_hole": false, 00:17:01.261 "seek_data": false, 00:17:01.261 "copy": false, 00:17:01.261 "nvme_iov_md": false 00:17:01.261 }, 00:17:01.262 "memory_domains": [ 00:17:01.262 { 00:17:01.262 "dma_device_id": "system", 00:17:01.262 "dma_device_type": 1 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.262 "dma_device_type": 2 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "dma_device_id": "system", 00:17:01.262 "dma_device_type": 1 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.262 "dma_device_type": 2 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "dma_device_id": "system", 00:17:01.262 "dma_device_type": 1 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.262 "dma_device_type": 2 00:17:01.262 } 00:17:01.262 ], 00:17:01.262 "driver_specific": { 00:17:01.262 "raid": { 00:17:01.262 "uuid": "45ca30cb-e735-4b93-9ce4-923ada6373ca", 00:17:01.262 "strip_size_kb": 64, 00:17:01.262 "state": "online", 00:17:01.262 "raid_level": "raid0", 00:17:01.262 "superblock": false, 00:17:01.262 "num_base_bdevs": 3, 00:17:01.262 "num_base_bdevs_discovered": 3, 00:17:01.262 "num_base_bdevs_operational": 3, 00:17:01.262 "base_bdevs_list": [ 00:17:01.262 { 00:17:01.262 "name": "NewBaseBdev", 00:17:01.262 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:17:01.262 "is_configured": true, 00:17:01.262 "data_offset": 0, 00:17:01.262 "data_size": 65536 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "name": "BaseBdev2", 00:17:01.262 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:17:01.262 "is_configured": true, 00:17:01.262 "data_offset": 0, 00:17:01.262 "data_size": 65536 00:17:01.262 }, 00:17:01.262 { 00:17:01.262 "name": "BaseBdev3", 00:17:01.262 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:17:01.262 "is_configured": true, 00:17:01.262 "data_offset": 0, 00:17:01.262 "data_size": 65536 00:17:01.262 } 00:17:01.262 ] 00:17:01.262 } 00:17:01.262 } 00:17:01.262 }' 00:17:01.262 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:01.262 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:01.262 BaseBdev2 00:17:01.262 BaseBdev3' 00:17:01.262 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.262 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:01.262 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.521 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.521 "name": "NewBaseBdev", 00:17:01.521 "aliases": [ 00:17:01.521 "51f8c274-1435-49c5-af75-732239dead91" 00:17:01.521 ], 00:17:01.521 "product_name": "Malloc disk", 00:17:01.521 "block_size": 512, 00:17:01.521 "num_blocks": 65536, 00:17:01.521 "uuid": "51f8c274-1435-49c5-af75-732239dead91", 00:17:01.521 "assigned_rate_limits": { 00:17:01.521 "rw_ios_per_sec": 0, 00:17:01.521 "rw_mbytes_per_sec": 0, 00:17:01.521 "r_mbytes_per_sec": 0, 00:17:01.521 "w_mbytes_per_sec": 0 00:17:01.521 }, 00:17:01.521 "claimed": 
true, 00:17:01.521 "claim_type": "exclusive_write", 00:17:01.521 "zoned": false, 00:17:01.521 "supported_io_types": { 00:17:01.521 "read": true, 00:17:01.521 "write": true, 00:17:01.521 "unmap": true, 00:17:01.521 "flush": true, 00:17:01.521 "reset": true, 00:17:01.521 "nvme_admin": false, 00:17:01.521 "nvme_io": false, 00:17:01.521 "nvme_io_md": false, 00:17:01.521 "write_zeroes": true, 00:17:01.521 "zcopy": true, 00:17:01.521 "get_zone_info": false, 00:17:01.521 "zone_management": false, 00:17:01.521 "zone_append": false, 00:17:01.521 "compare": false, 00:17:01.521 "compare_and_write": false, 00:17:01.521 "abort": true, 00:17:01.521 "seek_hole": false, 00:17:01.521 "seek_data": false, 00:17:01.521 "copy": true, 00:17:01.521 "nvme_iov_md": false 00:17:01.521 }, 00:17:01.521 "memory_domains": [ 00:17:01.521 { 00:17:01.521 "dma_device_id": "system", 00:17:01.521 "dma_device_type": 1 00:17:01.521 }, 00:17:01.521 { 00:17:01.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.521 "dma_device_type": 2 00:17:01.521 } 00:17:01.521 ], 00:17:01.521 "driver_specific": {} 00:17:01.521 }' 00:17:01.521 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.521 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.780 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.780 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.780 23:02:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.780 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.780 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.780 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.780 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.780 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.039 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.039 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.039 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.039 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:02.039 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.299 "name": "BaseBdev2", 00:17:02.299 "aliases": [ 00:17:02.299 "39bac579-e071-4693-b111-4f847edee0b2" 00:17:02.299 ], 00:17:02.299 "product_name": "Malloc disk", 00:17:02.299 "block_size": 512, 00:17:02.299 "num_blocks": 65536, 00:17:02.299 "uuid": "39bac579-e071-4693-b111-4f847edee0b2", 00:17:02.299 "assigned_rate_limits": { 00:17:02.299 "rw_ios_per_sec": 0, 00:17:02.299 "rw_mbytes_per_sec": 0, 00:17:02.299 "r_mbytes_per_sec": 0, 00:17:02.299 "w_mbytes_per_sec": 0 00:17:02.299 }, 00:17:02.299 "claimed": true, 00:17:02.299 "claim_type": "exclusive_write", 00:17:02.299 "zoned": false, 00:17:02.299 "supported_io_types": { 00:17:02.299 "read": true, 00:17:02.299 "write": true, 00:17:02.299 "unmap": true, 
00:17:02.299 "flush": true, 00:17:02.299 "reset": true, 00:17:02.299 "nvme_admin": false, 00:17:02.299 "nvme_io": false, 00:17:02.299 "nvme_io_md": false, 00:17:02.299 "write_zeroes": true, 00:17:02.299 "zcopy": true, 00:17:02.299 "get_zone_info": false, 00:17:02.299 "zone_management": false, 00:17:02.299 "zone_append": false, 00:17:02.299 "compare": false, 00:17:02.299 "compare_and_write": false, 00:17:02.299 "abort": true, 00:17:02.299 "seek_hole": false, 00:17:02.299 "seek_data": false, 00:17:02.299 "copy": true, 00:17:02.299 "nvme_iov_md": false 00:17:02.299 }, 00:17:02.299 "memory_domains": [ 00:17:02.299 { 00:17:02.299 "dma_device_id": "system", 00:17:02.299 "dma_device_type": 1 00:17:02.299 }, 00:17:02.299 { 00:17:02.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.299 "dma_device_type": 2 00:17:02.299 } 00:17:02.299 ], 00:17:02.299 "driver_specific": {} 00:17:02.299 }' 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.299 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:02.558 23:02:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.817 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.817 "name": "BaseBdev3", 00:17:02.817 "aliases": [ 00:17:02.817 "bb160a0a-76c7-4341-b468-7ae05b6cea17" 00:17:02.817 ], 00:17:02.817 "product_name": "Malloc disk", 00:17:02.817 "block_size": 512, 00:17:02.817 "num_blocks": 65536, 00:17:02.817 "uuid": "bb160a0a-76c7-4341-b468-7ae05b6cea17", 00:17:02.817 "assigned_rate_limits": { 00:17:02.817 "rw_ios_per_sec": 0, 00:17:02.817 "rw_mbytes_per_sec": 0, 00:17:02.817 "r_mbytes_per_sec": 0, 00:17:02.817 "w_mbytes_per_sec": 0 00:17:02.817 }, 00:17:02.817 "claimed": true, 00:17:02.817 "claim_type": "exclusive_write", 00:17:02.817 "zoned": false, 00:17:02.817 "supported_io_types": { 00:17:02.817 "read": true, 00:17:02.817 "write": true, 00:17:02.817 "unmap": true, 00:17:02.817 "flush": true, 00:17:02.817 "reset": true, 00:17:02.817 "nvme_admin": false, 00:17:02.817 "nvme_io": false, 00:17:02.817 "nvme_io_md": false, 00:17:02.817 "write_zeroes": true, 
00:17:02.817 "zcopy": true, 00:17:02.817 "get_zone_info": false, 00:17:02.817 "zone_management": false, 00:17:02.817 "zone_append": false, 00:17:02.817 "compare": false, 00:17:02.817 "compare_and_write": false, 00:17:02.817 "abort": true, 00:17:02.817 "seek_hole": false, 00:17:02.817 "seek_data": false, 00:17:02.817 "copy": true, 00:17:02.817 "nvme_iov_md": false 00:17:02.817 }, 00:17:02.817 "memory_domains": [ 00:17:02.817 { 00:17:02.817 "dma_device_id": "system", 00:17:02.817 "dma_device_type": 1 00:17:02.817 }, 00:17:02.817 { 00:17:02.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.817 "dma_device_type": 2 00:17:02.817 } 00:17:02.817 ], 00:17:02.817 "driver_specific": {} 00:17:02.817 }' 00:17:02.817 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.817 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.817 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.817 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.076 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:03.335 [2024-07-13 23:02:52.702904] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.335 [2024-07-13 23:02:52.703262] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.335 [2024-07-13 23:02:52.703480] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.335 [2024-07-13 23:02:52.703651] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.335 [2024-07-13 23:02:52.703756] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 135313 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 135313 ']' 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 135313 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.335 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135313 00:17:03.594 23:02:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.594 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.594 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135313' 00:17:03.594 killing process with pid 135313 00:17:03.594 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 135313 00:17:03.594 [2024-07-13 23:02:52.748889] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.594 23:02:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 135313 00:17:03.594 [2024-07-13 23:02:52.776233] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:03.854 00:17:03.854 real 0m29.468s 00:17:03.854 user 0m55.982s 00:17:03.854 sys 0m3.531s 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.854 ************************************ 00:17:03.854 END TEST raid_state_function_test 00:17:03.854 ************************************ 00:17:03.854 23:02:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:03.854 23:02:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:03.854 23:02:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:03.854 23:02:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.854 23:02:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.854 ************************************ 00:17:03.854 START TEST raid_state_function_test_sb 00:17:03.854 ************************************ 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136303 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136303' 00:17:03.854 Process raid pid: 136303 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 136303 /var/tmp/spdk-raid.sock 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 136303 ']' 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:03.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.854 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.854 [2024-07-13 23:02:53.154821] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
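For reference, the raid_state_function_test_sb setup traced here boils down to the following condensed, hypothetical replay; this is an illustrative sketch, not the test script itself, and it assumes a built SPDK tree with bdev_svc and rpc.py at the paths the log shows (the readiness loop stands in for the waitforlisten helper):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Start a bare bdev service with raid debug logging, as in the trace above.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Wait until the RPC socket answers before issuing commands.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # Superblock variant: '-z 64' sets a 64 KiB strip size and '-s' writes a
    # superblock to each base bdev, the only difference from the non-_sb test.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

Because none of the three named base bdevs exists yet, the create call leaves the raid in the "configuring" state, which is exactly what the records that follow assert.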
00:17:03.854 [2024-07-13 23:02:53.155081] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.112 [2024-07-13 23:02:53.306337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.112 [2024-07-13 23:02:53.377948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.112 [2024-07-13 23:02:53.435468] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.676 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.676 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:04.676 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:04.933 [2024-07-13 23:02:54.290197] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.933 [2024-07-13 23:02:54.290295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.933 [2024-07-13 23:02:54.290325] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.933 [2024-07-13 23:02:54.290345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.933 [2024-07-13 23:02:54.290353] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:04.933 [2024-07-13 23:02:54.290392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:04.933 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:04.933 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.933 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.934 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.192 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.192 "name": "Existed_Raid", 00:17:05.192 "uuid": 
"940aeab8-e81b-4d10-b647-e9bb2fe980bb", 00:17:05.192 "strip_size_kb": 64, 00:17:05.192 "state": "configuring", 00:17:05.192 "raid_level": "raid0", 00:17:05.192 "superblock": true, 00:17:05.192 "num_base_bdevs": 3, 00:17:05.192 "num_base_bdevs_discovered": 0, 00:17:05.192 "num_base_bdevs_operational": 3, 00:17:05.192 "base_bdevs_list": [ 00:17:05.192 { 00:17:05.192 "name": "BaseBdev1", 00:17:05.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.192 "is_configured": false, 00:17:05.192 "data_offset": 0, 00:17:05.192 "data_size": 0 00:17:05.192 }, 00:17:05.192 { 00:17:05.192 "name": "BaseBdev2", 00:17:05.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.192 "is_configured": false, 00:17:05.192 "data_offset": 0, 00:17:05.192 "data_size": 0 00:17:05.192 }, 00:17:05.192 { 00:17:05.192 "name": "BaseBdev3", 00:17:05.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.192 "is_configured": false, 00:17:05.192 "data_offset": 0, 00:17:05.192 "data_size": 0 00:17:05.192 } 00:17:05.192 ] 00:17:05.192 }' 00:17:05.192 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.192 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.126 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.126 [2024-07-13 23:02:55.470277] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.126 [2024-07-13 23:02:55.470341] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:06.126 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:06.384 [2024-07-13 23:02:55.738337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.384 [2024-07-13 23:02:55.738424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.384 [2024-07-13 23:02:55.738453] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.384 [2024-07-13 23:02:55.738472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.384 [2024-07-13 23:02:55.738479] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.384 [2024-07-13 23:02:55.738503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.384 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.643 [2024-07-13 23:02:55.997332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.643 BaseBdev1 00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:06.643 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.901 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.159 [ 00:17:07.159 { 00:17:07.159 "name": "BaseBdev1", 00:17:07.159 "aliases": [ 00:17:07.159 "17be858d-79f8-4b66-98d3-37e598c19dc8" 00:17:07.159 ], 00:17:07.159 "product_name": "Malloc disk", 00:17:07.159 "block_size": 512, 00:17:07.159 "num_blocks": 65536, 00:17:07.159 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:07.159 "assigned_rate_limits": { 00:17:07.159 "rw_ios_per_sec": 0, 00:17:07.159 "rw_mbytes_per_sec": 0, 00:17:07.159 "r_mbytes_per_sec": 0, 00:17:07.159 "w_mbytes_per_sec": 0 00:17:07.159 }, 00:17:07.159 "claimed": true, 00:17:07.159 "claim_type": "exclusive_write", 00:17:07.159 "zoned": false, 00:17:07.159 "supported_io_types": { 00:17:07.159 "read": true, 00:17:07.159 "write": true, 00:17:07.159 "unmap": true, 00:17:07.159 "flush": true, 00:17:07.159 "reset": true, 00:17:07.159 "nvme_admin": false, 00:17:07.159 "nvme_io": false, 00:17:07.159 "nvme_io_md": false, 00:17:07.159 "write_zeroes": true, 00:17:07.159 "zcopy": true, 00:17:07.159 "get_zone_info": false, 00:17:07.159 "zone_management": false, 00:17:07.159 "zone_append": false, 00:17:07.159 "compare": false, 00:17:07.159 "compare_and_write": false, 00:17:07.159 "abort": true, 00:17:07.159 "seek_hole": false, 00:17:07.159 "seek_data": false, 00:17:07.159 "copy": true, 00:17:07.159 "nvme_iov_md": false 00:17:07.159 }, 00:17:07.159 "memory_domains": [ 00:17:07.159 { 00:17:07.159 "dma_device_id": "system", 00:17:07.159 "dma_device_type": 1 00:17:07.159 }, 00:17:07.159 { 00:17:07.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.159 "dma_device_type": 2 00:17:07.159 } 00:17:07.159 ], 00:17:07.159 "driver_specific": {} 00:17:07.159 } 00:17:07.159 ] 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.159 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.418 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.418 "name": "Existed_Raid", 00:17:07.418 "uuid": "e47e37ca-37bf-4a9e-a899-2d4d594dabee", 00:17:07.418 "strip_size_kb": 64, 00:17:07.418 "state": "configuring", 00:17:07.418 "raid_level": "raid0", 00:17:07.418 "superblock": true, 00:17:07.418 "num_base_bdevs": 3, 00:17:07.418 "num_base_bdevs_discovered": 1, 00:17:07.418 "num_base_bdevs_operational": 3, 00:17:07.418 "base_bdevs_list": [ 00:17:07.418 { 00:17:07.418 "name": "BaseBdev1", 00:17:07.418 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:07.418 "is_configured": true, 00:17:07.418 "data_offset": 2048, 00:17:07.418 "data_size": 63488 00:17:07.418 }, 00:17:07.418 { 00:17:07.418 "name": "BaseBdev2", 00:17:07.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.418 "is_configured": false, 00:17:07.418 "data_offset": 0, 00:17:07.418 "data_size": 0 00:17:07.418 }, 00:17:07.418 { 00:17:07.418 "name": "BaseBdev3", 00:17:07.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.418 "is_configured": false, 00:17:07.418 "data_offset": 0, 00:17:07.418 "data_size": 0 00:17:07.418 } 00:17:07.418 ] 00:17:07.418 }' 00:17:07.418 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.418 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.984 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.242 [2024-07-13 23:02:57.618238] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.242 [2024-07-13 23:02:57.618326] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:08.242 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:08.500 [2024-07-13 23:02:57.882314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.500 [2024-07-13 23:02:57.884410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.500 [2024-07-13 23:02:57.884485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.500 [2024-07-13 23:02:57.884513] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.500 [2024-07-13 23:02:57.884538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:08.500 23:02:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.500 23:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.758 23:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.758 "name": "Existed_Raid", 00:17:08.758 "uuid": "d07c7806-56db-4567-a599-7209fdc5624a", 00:17:08.758 "strip_size_kb": 64, 00:17:08.758 "state": "configuring", 00:17:08.758 "raid_level": "raid0", 00:17:08.758 "superblock": true, 00:17:08.758 "num_base_bdevs": 3, 00:17:08.758 "num_base_bdevs_discovered": 1, 00:17:08.758 "num_base_bdevs_operational": 3, 00:17:08.758 "base_bdevs_list": [ 00:17:08.758 { 00:17:08.758 "name": "BaseBdev1", 00:17:08.758 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:08.758 "is_configured": true, 00:17:08.758 "data_offset": 2048, 00:17:08.758 "data_size": 63488 00:17:08.758 }, 00:17:08.758 { 00:17:08.758 "name": "BaseBdev2", 00:17:08.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.758 "is_configured": false, 00:17:08.758 "data_offset": 0, 00:17:08.758 "data_size": 0 00:17:08.758 }, 00:17:08.758 { 00:17:08.758 "name": "BaseBdev3", 00:17:08.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.758 "is_configured": false, 00:17:08.758 "data_offset": 0, 00:17:08.758 "data_size": 0 00:17:08.758 } 00:17:08.758 ] 00:17:08.758 }' 00:17:08.758 23:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.758 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:09.692 [2024-07-13 23:02:58.980493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.692 BaseBdev2 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:09.692 23:02:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.951 23:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.209 [ 00:17:10.209 { 00:17:10.209 "name": "BaseBdev2", 00:17:10.209 "aliases": [ 00:17:10.209 "d90aea91-c962-4414-9f33-28b5ba95f47e" 00:17:10.209 ], 00:17:10.209 "product_name": "Malloc disk", 00:17:10.209 "block_size": 512, 00:17:10.209 "num_blocks": 65536, 00:17:10.209 "uuid": "d90aea91-c962-4414-9f33-28b5ba95f47e", 00:17:10.209 "assigned_rate_limits": { 00:17:10.209 "rw_ios_per_sec": 0, 00:17:10.209 "rw_mbytes_per_sec": 0, 00:17:10.209 "r_mbytes_per_sec": 0, 00:17:10.209 "w_mbytes_per_sec": 0 00:17:10.209 }, 00:17:10.209 "claimed": true, 00:17:10.209 "claim_type": "exclusive_write", 00:17:10.209 "zoned": false, 00:17:10.209 "supported_io_types": { 00:17:10.209 "read": true, 00:17:10.209 "write": true, 00:17:10.209 "unmap": true, 00:17:10.209 "flush": true, 00:17:10.209 "reset": true, 00:17:10.209 "nvme_admin": false, 00:17:10.209 "nvme_io": false, 00:17:10.209 "nvme_io_md": false, 00:17:10.209 "write_zeroes": true, 00:17:10.209 "zcopy": true, 00:17:10.209 "get_zone_info": false, 00:17:10.209 "zone_management": false, 00:17:10.209 "zone_append": false, 00:17:10.209 "compare": false, 00:17:10.209 "compare_and_write": false, 00:17:10.209 "abort": true, 00:17:10.209 "seek_hole": false, 00:17:10.209 "seek_data": false, 00:17:10.209 "copy": true, 00:17:10.209 "nvme_iov_md": false 00:17:10.209 }, 00:17:10.209 "memory_domains": [ 00:17:10.209 { 00:17:10.209 "dma_device_id": "system", 00:17:10.209 "dma_device_type": 1 00:17:10.209 }, 00:17:10.209 { 00:17:10.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.209 "dma_device_type": 2 00:17:10.209 } 00:17:10.209 ], 00:17:10.209 "driver_specific": {} 00:17:10.209 } 00:17:10.209 ] 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.209 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.467 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.467 "name": "Existed_Raid", 00:17:10.467 "uuid": "d07c7806-56db-4567-a599-7209fdc5624a", 00:17:10.467 "strip_size_kb": 64, 00:17:10.467 "state": "configuring", 00:17:10.467 "raid_level": "raid0", 00:17:10.467 "superblock": true, 00:17:10.467 "num_base_bdevs": 3, 00:17:10.467 "num_base_bdevs_discovered": 2, 00:17:10.467 "num_base_bdevs_operational": 3, 00:17:10.467 "base_bdevs_list": [ 00:17:10.467 { 00:17:10.467 "name": "BaseBdev1", 00:17:10.467 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:10.467 "is_configured": true, 00:17:10.467 "data_offset": 2048, 00:17:10.467 "data_size": 63488 00:17:10.467 }, 00:17:10.467 { 00:17:10.467 "name": "BaseBdev2", 00:17:10.467 "uuid": "d90aea91-c962-4414-9f33-28b5ba95f47e", 00:17:10.467 "is_configured": true, 00:17:10.467 "data_offset": 2048, 00:17:10.467 "data_size": 63488 00:17:10.467 }, 00:17:10.467 { 00:17:10.467 "name": "BaseBdev3", 00:17:10.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.467 "is_configured": false, 00:17:10.467 "data_offset": 0, 00:17:10.468 "data_size": 0 00:17:10.468 } 00:17:10.468 ] 00:17:10.468 }' 00:17:10.468 23:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.468 23:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.034 23:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:11.292 [2024-07-13 23:03:00.693851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.292 [2024-07-13 23:03:00.694092] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:11.292 [2024-07-13 23:03:00.694108] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:11.292 [2024-07-13 23:03:00.694268] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:17:11.292 [2024-07-13 23:03:00.694695] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:11.292 [2024-07-13 23:03:00.694722] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:11.292 BaseBdev3 00:17:11.292 [2024-07-13 23:03:00.694907] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.550 23:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:11.808 [ 00:17:11.808 { 00:17:11.808 "name": "BaseBdev3", 00:17:11.808 "aliases": [ 00:17:11.808 "90313664-18c6-4436-a377-1d5d11a51053" 00:17:11.808 ], 00:17:11.808 "product_name": "Malloc disk", 00:17:11.808 "block_size": 512, 00:17:11.808 "num_blocks": 65536, 00:17:11.808 "uuid": "90313664-18c6-4436-a377-1d5d11a51053", 00:17:11.808 "assigned_rate_limits": { 00:17:11.808 "rw_ios_per_sec": 0, 00:17:11.808 "rw_mbytes_per_sec": 0, 00:17:11.808 "r_mbytes_per_sec": 0, 00:17:11.808 "w_mbytes_per_sec": 0 00:17:11.808 }, 00:17:11.808 "claimed": true, 00:17:11.808 "claim_type": "exclusive_write", 00:17:11.808 "zoned": false, 00:17:11.808 "supported_io_types": { 00:17:11.808 "read": true, 00:17:11.808 "write": true, 00:17:11.808 "unmap": true, 00:17:11.808 "flush": true, 00:17:11.808 "reset": true, 00:17:11.808 "nvme_admin": false, 00:17:11.808 "nvme_io": false, 00:17:11.808 "nvme_io_md": false, 00:17:11.808 "write_zeroes": true, 00:17:11.808 "zcopy": true, 00:17:11.808 "get_zone_info": false, 00:17:11.808 "zone_management": false, 00:17:11.808 "zone_append": false, 00:17:11.808 "compare": false, 00:17:11.808 "compare_and_write": false, 00:17:11.808 "abort": true, 00:17:11.808 "seek_hole": false, 00:17:11.808 "seek_data": false, 00:17:11.808 "copy": true, 00:17:11.808 "nvme_iov_md": false 00:17:11.808 }, 00:17:11.808 "memory_domains": [ 00:17:11.808 { 00:17:11.808 "dma_device_id": "system", 00:17:11.808 "dma_device_type": 1 00:17:11.808 }, 00:17:11.808 { 00:17:11.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.808 "dma_device_type": 2 00:17:11.808 } 00:17:11.808 ], 00:17:11.808 "driver_specific": {} 00:17:11.808 } 00:17:11.808 ] 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.808 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.065 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.065 "name": "Existed_Raid", 00:17:12.065 "uuid": "d07c7806-56db-4567-a599-7209fdc5624a", 00:17:12.065 "strip_size_kb": 64, 00:17:12.065 "state": "online", 00:17:12.065 "raid_level": "raid0", 00:17:12.065 "superblock": true, 00:17:12.065 "num_base_bdevs": 3, 00:17:12.065 "num_base_bdevs_discovered": 3, 00:17:12.065 "num_base_bdevs_operational": 3, 00:17:12.065 "base_bdevs_list": [ 00:17:12.065 { 00:17:12.065 "name": "BaseBdev1", 00:17:12.065 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:12.065 "is_configured": true, 00:17:12.065 "data_offset": 2048, 00:17:12.065 "data_size": 63488 00:17:12.065 }, 00:17:12.065 { 00:17:12.065 "name": "BaseBdev2", 00:17:12.065 "uuid": "d90aea91-c962-4414-9f33-28b5ba95f47e", 00:17:12.065 "is_configured": true, 00:17:12.065 "data_offset": 2048, 00:17:12.065 "data_size": 63488 00:17:12.065 }, 00:17:12.065 { 00:17:12.065 "name": "BaseBdev3", 00:17:12.065 "uuid": "90313664-18c6-4436-a377-1d5d11a51053", 00:17:12.065 "is_configured": true, 00:17:12.065 "data_offset": 2048, 00:17:12.065 "data_size": 63488 00:17:12.065 } 00:17:12.065 ] 00:17:12.065 }' 00:17:12.065 23:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.065 23:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:12.633 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:12.891 [2024-07-13 23:03:02.222651] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.891 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:12.891 "name": "Existed_Raid", 00:17:12.891 "aliases": [ 00:17:12.891 "d07c7806-56db-4567-a599-7209fdc5624a" 00:17:12.891 ], 00:17:12.891 "product_name": "Raid Volume", 00:17:12.891 "block_size": 512, 00:17:12.891 "num_blocks": 190464, 00:17:12.891 "uuid": "d07c7806-56db-4567-a599-7209fdc5624a", 00:17:12.891 
"assigned_rate_limits": { 00:17:12.891 "rw_ios_per_sec": 0, 00:17:12.891 "rw_mbytes_per_sec": 0, 00:17:12.891 "r_mbytes_per_sec": 0, 00:17:12.891 "w_mbytes_per_sec": 0 00:17:12.891 }, 00:17:12.891 "claimed": false, 00:17:12.891 "zoned": false, 00:17:12.891 "supported_io_types": { 00:17:12.891 "read": true, 00:17:12.891 "write": true, 00:17:12.891 "unmap": true, 00:17:12.891 "flush": true, 00:17:12.891 "reset": true, 00:17:12.891 "nvme_admin": false, 00:17:12.891 "nvme_io": false, 00:17:12.891 "nvme_io_md": false, 00:17:12.891 "write_zeroes": true, 00:17:12.891 "zcopy": false, 00:17:12.891 "get_zone_info": false, 00:17:12.891 "zone_management": false, 00:17:12.891 "zone_append": false, 00:17:12.891 "compare": false, 00:17:12.891 "compare_and_write": false, 00:17:12.891 "abort": false, 00:17:12.891 "seek_hole": false, 00:17:12.891 "seek_data": false, 00:17:12.891 "copy": false, 00:17:12.891 "nvme_iov_md": false 00:17:12.891 }, 00:17:12.891 "memory_domains": [ 00:17:12.891 { 00:17:12.891 "dma_device_id": "system", 00:17:12.891 "dma_device_type": 1 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.891 "dma_device_type": 2 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "dma_device_id": "system", 00:17:12.891 "dma_device_type": 1 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.891 "dma_device_type": 2 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "dma_device_id": "system", 00:17:12.891 "dma_device_type": 1 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.891 "dma_device_type": 2 00:17:12.891 } 00:17:12.891 ], 00:17:12.891 "driver_specific": { 00:17:12.891 "raid": { 00:17:12.891 "uuid": "d07c7806-56db-4567-a599-7209fdc5624a", 00:17:12.891 "strip_size_kb": 64, 00:17:12.891 "state": "online", 00:17:12.891 "raid_level": "raid0", 00:17:12.891 "superblock": true, 00:17:12.891 "num_base_bdevs": 3, 00:17:12.891 "num_base_bdevs_discovered": 3, 00:17:12.891 "num_base_bdevs_operational": 3, 00:17:12.891 "base_bdevs_list": [ 00:17:12.891 { 00:17:12.891 "name": "BaseBdev1", 00:17:12.891 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:12.891 "is_configured": true, 00:17:12.891 "data_offset": 2048, 00:17:12.891 "data_size": 63488 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "name": "BaseBdev2", 00:17:12.891 "uuid": "d90aea91-c962-4414-9f33-28b5ba95f47e", 00:17:12.891 "is_configured": true, 00:17:12.891 "data_offset": 2048, 00:17:12.891 "data_size": 63488 00:17:12.891 }, 00:17:12.891 { 00:17:12.891 "name": "BaseBdev3", 00:17:12.891 "uuid": "90313664-18c6-4436-a377-1d5d11a51053", 00:17:12.891 "is_configured": true, 00:17:12.891 "data_offset": 2048, 00:17:12.891 "data_size": 63488 00:17:12.891 } 00:17:12.891 ] 00:17:12.891 } 00:17:12.891 } 00:17:12.891 }' 00:17:12.891 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.891 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:12.891 BaseBdev2 00:17:12.891 BaseBdev3' 00:17:12.891 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.891 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:12.891 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:17:13.149 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.149 "name": "BaseBdev1", 00:17:13.149 "aliases": [ 00:17:13.149 "17be858d-79f8-4b66-98d3-37e598c19dc8" 00:17:13.149 ], 00:17:13.149 "product_name": "Malloc disk", 00:17:13.149 "block_size": 512, 00:17:13.149 "num_blocks": 65536, 00:17:13.149 "uuid": "17be858d-79f8-4b66-98d3-37e598c19dc8", 00:17:13.149 "assigned_rate_limits": { 00:17:13.149 "rw_ios_per_sec": 0, 00:17:13.149 "rw_mbytes_per_sec": 0, 00:17:13.149 "r_mbytes_per_sec": 0, 00:17:13.149 "w_mbytes_per_sec": 0 00:17:13.149 }, 00:17:13.149 "claimed": true, 00:17:13.149 "claim_type": "exclusive_write", 00:17:13.149 "zoned": false, 00:17:13.149 "supported_io_types": { 00:17:13.149 "read": true, 00:17:13.149 "write": true, 00:17:13.149 "unmap": true, 00:17:13.149 "flush": true, 00:17:13.149 "reset": true, 00:17:13.149 "nvme_admin": false, 00:17:13.149 "nvme_io": false, 00:17:13.149 "nvme_io_md": false, 00:17:13.149 "write_zeroes": true, 00:17:13.149 "zcopy": true, 00:17:13.149 "get_zone_info": false, 00:17:13.149 "zone_management": false, 00:17:13.149 "zone_append": false, 00:17:13.149 "compare": false, 00:17:13.149 "compare_and_write": false, 00:17:13.149 "abort": true, 00:17:13.149 "seek_hole": false, 00:17:13.149 "seek_data": false, 00:17:13.149 "copy": true, 00:17:13.149 "nvme_iov_md": false 00:17:13.149 }, 00:17:13.149 "memory_domains": [ 00:17:13.149 { 00:17:13.149 "dma_device_id": "system", 00:17:13.149 "dma_device_type": 1 00:17:13.149 }, 00:17:13.149 { 00:17:13.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.149 "dma_device_type": 2 00:17:13.149 } 00:17:13.149 ], 00:17:13.149 "driver_specific": {} 00:17:13.149 }' 00:17:13.149 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.407 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:13.666 23:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.976 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.976 "name": "BaseBdev2", 
00:17:13.976 "aliases": [ 00:17:13.976 "d90aea91-c962-4414-9f33-28b5ba95f47e" 00:17:13.976 ], 00:17:13.976 "product_name": "Malloc disk", 00:17:13.976 "block_size": 512, 00:17:13.977 "num_blocks": 65536, 00:17:13.977 "uuid": "d90aea91-c962-4414-9f33-28b5ba95f47e", 00:17:13.977 "assigned_rate_limits": { 00:17:13.977 "rw_ios_per_sec": 0, 00:17:13.977 "rw_mbytes_per_sec": 0, 00:17:13.977 "r_mbytes_per_sec": 0, 00:17:13.977 "w_mbytes_per_sec": 0 00:17:13.977 }, 00:17:13.977 "claimed": true, 00:17:13.977 "claim_type": "exclusive_write", 00:17:13.977 "zoned": false, 00:17:13.977 "supported_io_types": { 00:17:13.977 "read": true, 00:17:13.977 "write": true, 00:17:13.977 "unmap": true, 00:17:13.977 "flush": true, 00:17:13.977 "reset": true, 00:17:13.977 "nvme_admin": false, 00:17:13.977 "nvme_io": false, 00:17:13.977 "nvme_io_md": false, 00:17:13.977 "write_zeroes": true, 00:17:13.977 "zcopy": true, 00:17:13.977 "get_zone_info": false, 00:17:13.977 "zone_management": false, 00:17:13.977 "zone_append": false, 00:17:13.977 "compare": false, 00:17:13.977 "compare_and_write": false, 00:17:13.977 "abort": true, 00:17:13.977 "seek_hole": false, 00:17:13.977 "seek_data": false, 00:17:13.977 "copy": true, 00:17:13.977 "nvme_iov_md": false 00:17:13.977 }, 00:17:13.977 "memory_domains": [ 00:17:13.977 { 00:17:13.977 "dma_device_id": "system", 00:17:13.977 "dma_device_type": 1 00:17:13.977 }, 00:17:13.977 { 00:17:13.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.977 "dma_device_type": 2 00:17:13.977 } 00:17:13.977 ], 00:17:13.977 "driver_specific": {} 00:17:13.977 }' 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.977 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:14.246 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.505 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.505 "name": "BaseBdev3", 00:17:14.505 "aliases": [ 00:17:14.505 "90313664-18c6-4436-a377-1d5d11a51053" 00:17:14.505 ], 00:17:14.505 "product_name": "Malloc disk", 00:17:14.505 
"block_size": 512, 00:17:14.505 "num_blocks": 65536, 00:17:14.505 "uuid": "90313664-18c6-4436-a377-1d5d11a51053", 00:17:14.505 "assigned_rate_limits": { 00:17:14.505 "rw_ios_per_sec": 0, 00:17:14.505 "rw_mbytes_per_sec": 0, 00:17:14.505 "r_mbytes_per_sec": 0, 00:17:14.505 "w_mbytes_per_sec": 0 00:17:14.505 }, 00:17:14.505 "claimed": true, 00:17:14.505 "claim_type": "exclusive_write", 00:17:14.505 "zoned": false, 00:17:14.505 "supported_io_types": { 00:17:14.505 "read": true, 00:17:14.505 "write": true, 00:17:14.505 "unmap": true, 00:17:14.505 "flush": true, 00:17:14.505 "reset": true, 00:17:14.505 "nvme_admin": false, 00:17:14.505 "nvme_io": false, 00:17:14.505 "nvme_io_md": false, 00:17:14.505 "write_zeroes": true, 00:17:14.505 "zcopy": true, 00:17:14.505 "get_zone_info": false, 00:17:14.505 "zone_management": false, 00:17:14.505 "zone_append": false, 00:17:14.505 "compare": false, 00:17:14.505 "compare_and_write": false, 00:17:14.505 "abort": true, 00:17:14.505 "seek_hole": false, 00:17:14.505 "seek_data": false, 00:17:14.505 "copy": true, 00:17:14.505 "nvme_iov_md": false 00:17:14.505 }, 00:17:14.505 "memory_domains": [ 00:17:14.505 { 00:17:14.505 "dma_device_id": "system", 00:17:14.505 "dma_device_type": 1 00:17:14.505 }, 00:17:14.505 { 00:17:14.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.505 "dma_device_type": 2 00:17:14.505 } 00:17:14.505 ], 00:17:14.505 "driver_specific": {} 00:17:14.505 }' 00:17:14.505 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.505 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.764 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.764 23:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.764 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.764 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.764 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.764 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.764 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.764 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.021 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.021 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.021 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.279 [2024-07-13 23:03:04.506921] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.279 [2024-07-13 23:03:04.506958] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.279 [2024-07-13 23:03:04.507077] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.279 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.537 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.537 "name": "Existed_Raid", 00:17:15.538 "uuid": "d07c7806-56db-4567-a599-7209fdc5624a", 00:17:15.538 "strip_size_kb": 64, 00:17:15.538 "state": "offline", 00:17:15.538 "raid_level": "raid0", 00:17:15.538 "superblock": true, 00:17:15.538 "num_base_bdevs": 3, 00:17:15.538 "num_base_bdevs_discovered": 2, 00:17:15.538 "num_base_bdevs_operational": 2, 00:17:15.538 "base_bdevs_list": [ 00:17:15.538 { 00:17:15.538 "name": null, 00:17:15.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.538 "is_configured": false, 00:17:15.538 "data_offset": 2048, 00:17:15.538 "data_size": 63488 00:17:15.538 }, 00:17:15.538 { 00:17:15.538 "name": "BaseBdev2", 00:17:15.538 "uuid": "d90aea91-c962-4414-9f33-28b5ba95f47e", 00:17:15.538 "is_configured": true, 00:17:15.538 "data_offset": 2048, 00:17:15.538 "data_size": 63488 00:17:15.538 }, 00:17:15.538 { 00:17:15.538 "name": "BaseBdev3", 00:17:15.538 "uuid": "90313664-18c6-4436-a377-1d5d11a51053", 00:17:15.538 "is_configured": true, 00:17:15.538 "data_offset": 2048, 00:17:15.538 "data_size": 63488 00:17:15.538 } 00:17:15.538 ] 00:17:15.538 }' 00:17:15.538 23:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.538 23:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.105 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:16.105 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:16.105 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:16.105 23:03:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.364 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:16.364 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.364 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:16.623 [2024-07-13 23:03:05.854597] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.623 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:16.623 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:16.623 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:16.623 23:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.882 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:16.882 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.882 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:17.141 [2024-07-13 23:03:06.361011] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:17.141 [2024-07-13 23:03:06.361066] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:17.141 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:17.141 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:17.141 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.141 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.400 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:17.400 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:17.400 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:17.400 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:17.400 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:17.400 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.400 BaseBdev2 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:17.659 23:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.659 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.918 [ 00:17:17.918 { 00:17:17.918 "name": "BaseBdev2", 00:17:17.918 "aliases": [ 00:17:17.918 "96529b47-66c3-4bdb-a09a-7454cfd4a620" 00:17:17.918 ], 00:17:17.918 "product_name": "Malloc disk", 00:17:17.918 "block_size": 512, 00:17:17.918 "num_blocks": 65536, 00:17:17.918 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:17.918 "assigned_rate_limits": { 00:17:17.918 "rw_ios_per_sec": 0, 00:17:17.918 "rw_mbytes_per_sec": 0, 00:17:17.918 "r_mbytes_per_sec": 0, 00:17:17.918 "w_mbytes_per_sec": 0 00:17:17.918 }, 00:17:17.918 "claimed": false, 00:17:17.918 "zoned": false, 00:17:17.918 "supported_io_types": { 00:17:17.918 "read": true, 00:17:17.918 "write": true, 00:17:17.918 "unmap": true, 00:17:17.918 "flush": true, 00:17:17.918 "reset": true, 00:17:17.918 "nvme_admin": false, 00:17:17.918 "nvme_io": false, 00:17:17.918 "nvme_io_md": false, 00:17:17.918 "write_zeroes": true, 00:17:17.918 "zcopy": true, 00:17:17.918 "get_zone_info": false, 00:17:17.918 "zone_management": false, 00:17:17.918 "zone_append": false, 00:17:17.918 "compare": false, 00:17:17.918 "compare_and_write": false, 00:17:17.918 "abort": true, 00:17:17.918 "seek_hole": false, 00:17:17.918 "seek_data": false, 00:17:17.918 "copy": true, 00:17:17.918 "nvme_iov_md": false 00:17:17.918 }, 00:17:17.918 "memory_domains": [ 00:17:17.918 { 00:17:17.918 "dma_device_id": "system", 00:17:17.918 "dma_device_type": 1 00:17:17.918 }, 00:17:17.918 { 00:17:17.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.918 "dma_device_type": 2 00:17:17.918 } 00:17:17.918 ], 00:17:17.919 "driver_specific": {} 00:17:17.919 } 00:17:17.919 ] 00:17:17.919 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:17.919 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:17.919 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:17.919 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:18.178 BaseBdev3 00:17:18.178 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:18.178 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:18.178 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.178 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:18.178 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.178 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.178 23:03:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.436 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:18.696 [ 00:17:18.696 { 00:17:18.696 "name": "BaseBdev3", 00:17:18.696 "aliases": [ 00:17:18.696 "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e" 00:17:18.696 ], 00:17:18.696 "product_name": "Malloc disk", 00:17:18.696 "block_size": 512, 00:17:18.696 "num_blocks": 65536, 00:17:18.696 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:18.696 "assigned_rate_limits": { 00:17:18.696 "rw_ios_per_sec": 0, 00:17:18.696 "rw_mbytes_per_sec": 0, 00:17:18.696 "r_mbytes_per_sec": 0, 00:17:18.696 "w_mbytes_per_sec": 0 00:17:18.696 }, 00:17:18.696 "claimed": false, 00:17:18.696 "zoned": false, 00:17:18.696 "supported_io_types": { 00:17:18.696 "read": true, 00:17:18.696 "write": true, 00:17:18.696 "unmap": true, 00:17:18.696 "flush": true, 00:17:18.696 "reset": true, 00:17:18.696 "nvme_admin": false, 00:17:18.696 "nvme_io": false, 00:17:18.696 "nvme_io_md": false, 00:17:18.696 "write_zeroes": true, 00:17:18.696 "zcopy": true, 00:17:18.696 "get_zone_info": false, 00:17:18.696 "zone_management": false, 00:17:18.696 "zone_append": false, 00:17:18.696 "compare": false, 00:17:18.696 "compare_and_write": false, 00:17:18.696 "abort": true, 00:17:18.696 "seek_hole": false, 00:17:18.696 "seek_data": false, 00:17:18.696 "copy": true, 00:17:18.696 "nvme_iov_md": false 00:17:18.696 }, 00:17:18.696 "memory_domains": [ 00:17:18.696 { 00:17:18.696 "dma_device_id": "system", 00:17:18.696 "dma_device_type": 1 00:17:18.696 }, 00:17:18.696 { 00:17:18.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.696 "dma_device_type": 2 00:17:18.696 } 00:17:18.696 ], 00:17:18.696 "driver_specific": {} 00:17:18.696 } 00:17:18.696 ] 00:17:18.696 23:03:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:18.696 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:18.696 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:18.696 23:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:18.954 [2024-07-13 23:03:08.153435] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.954 [2024-07-13 23:03:08.153535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.954 [2024-07-13 23:03:08.153590] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.954 [2024-07-13 23:03:08.155890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.954 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.213 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.213 "name": "Existed_Raid", 00:17:19.213 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:19.213 "strip_size_kb": 64, 00:17:19.213 "state": "configuring", 00:17:19.213 "raid_level": "raid0", 00:17:19.213 "superblock": true, 00:17:19.213 "num_base_bdevs": 3, 00:17:19.213 "num_base_bdevs_discovered": 2, 00:17:19.213 "num_base_bdevs_operational": 3, 00:17:19.213 "base_bdevs_list": [ 00:17:19.213 { 00:17:19.213 "name": "BaseBdev1", 00:17:19.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.213 "is_configured": false, 00:17:19.213 "data_offset": 0, 00:17:19.213 "data_size": 0 00:17:19.213 }, 00:17:19.213 { 00:17:19.213 "name": "BaseBdev2", 00:17:19.213 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:19.213 "is_configured": true, 00:17:19.213 "data_offset": 2048, 00:17:19.213 "data_size": 63488 00:17:19.213 }, 00:17:19.213 { 00:17:19.213 "name": "BaseBdev3", 00:17:19.213 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:19.213 "is_configured": true, 00:17:19.213 "data_offset": 2048, 00:17:19.213 "data_size": 63488 00:17:19.213 } 00:17:19.213 ] 00:17:19.213 }' 00:17:19.213 23:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.213 23:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.780 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:20.038 [2024-07-13 23:03:09.309684] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.038 23:03:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.038 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.297 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.297 "name": "Existed_Raid", 00:17:20.297 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:20.297 "strip_size_kb": 64, 00:17:20.297 "state": "configuring", 00:17:20.297 "raid_level": "raid0", 00:17:20.297 "superblock": true, 00:17:20.297 "num_base_bdevs": 3, 00:17:20.297 "num_base_bdevs_discovered": 1, 00:17:20.297 "num_base_bdevs_operational": 3, 00:17:20.297 "base_bdevs_list": [ 00:17:20.297 { 00:17:20.297 "name": "BaseBdev1", 00:17:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.297 "is_configured": false, 00:17:20.297 "data_offset": 0, 00:17:20.297 "data_size": 0 00:17:20.297 }, 00:17:20.297 { 00:17:20.297 "name": null, 00:17:20.297 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:20.297 "is_configured": false, 00:17:20.297 "data_offset": 2048, 00:17:20.297 "data_size": 63488 00:17:20.297 }, 00:17:20.297 { 00:17:20.297 "name": "BaseBdev3", 00:17:20.297 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:20.297 "is_configured": true, 00:17:20.297 "data_offset": 2048, 00:17:20.297 "data_size": 63488 00:17:20.297 } 00:17:20.297 ] 00:17:20.297 }' 00:17:20.297 23:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.297 23:03:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.864 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.864 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:21.122 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:21.122 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.381 [2024-07-13 23:03:10.618913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.381 BaseBdev1 00:17:21.381 23:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:21.381 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:21.381 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.381 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:21.381 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.381 23:03:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:21.381 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.639 23:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.639 [ 00:17:21.639 { 00:17:21.639 "name": "BaseBdev1", 00:17:21.639 "aliases": [ 00:17:21.639 "98bb264b-1259-46fd-b8f3-8506f92e8027" 00:17:21.639 ], 00:17:21.639 "product_name": "Malloc disk", 00:17:21.639 "block_size": 512, 00:17:21.639 "num_blocks": 65536, 00:17:21.639 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:21.639 "assigned_rate_limits": { 00:17:21.639 "rw_ios_per_sec": 0, 00:17:21.639 "rw_mbytes_per_sec": 0, 00:17:21.639 "r_mbytes_per_sec": 0, 00:17:21.639 "w_mbytes_per_sec": 0 00:17:21.639 }, 00:17:21.639 "claimed": true, 00:17:21.639 "claim_type": "exclusive_write", 00:17:21.639 "zoned": false, 00:17:21.639 "supported_io_types": { 00:17:21.639 "read": true, 00:17:21.639 "write": true, 00:17:21.639 "unmap": true, 00:17:21.639 "flush": true, 00:17:21.639 "reset": true, 00:17:21.639 "nvme_admin": false, 00:17:21.639 "nvme_io": false, 00:17:21.639 "nvme_io_md": false, 00:17:21.639 "write_zeroes": true, 00:17:21.639 "zcopy": true, 00:17:21.639 "get_zone_info": false, 00:17:21.639 "zone_management": false, 00:17:21.639 "zone_append": false, 00:17:21.639 "compare": false, 00:17:21.639 "compare_and_write": false, 00:17:21.639 "abort": true, 00:17:21.639 "seek_hole": false, 00:17:21.639 "seek_data": false, 00:17:21.639 "copy": true, 00:17:21.639 "nvme_iov_md": false 00:17:21.639 }, 00:17:21.639 "memory_domains": [ 00:17:21.639 { 00:17:21.639 "dma_device_id": "system", 00:17:21.639 "dma_device_type": 1 00:17:21.639 }, 00:17:21.639 { 00:17:21.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.639 "dma_device_type": 2 00:17:21.639 } 00:17:21.639 ], 00:17:21.639 "driver_specific": {} 00:17:21.639 } 00:17:21.639 ] 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.897 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.155 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.155 "name": "Existed_Raid", 00:17:22.155 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:22.155 "strip_size_kb": 64, 00:17:22.155 "state": "configuring", 00:17:22.155 "raid_level": "raid0", 00:17:22.155 "superblock": true, 00:17:22.155 "num_base_bdevs": 3, 00:17:22.155 "num_base_bdevs_discovered": 2, 00:17:22.155 "num_base_bdevs_operational": 3, 00:17:22.155 "base_bdevs_list": [ 00:17:22.155 { 00:17:22.155 "name": "BaseBdev1", 00:17:22.155 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:22.155 "is_configured": true, 00:17:22.155 "data_offset": 2048, 00:17:22.155 "data_size": 63488 00:17:22.155 }, 00:17:22.155 { 00:17:22.155 "name": null, 00:17:22.155 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:22.155 "is_configured": false, 00:17:22.155 "data_offset": 2048, 00:17:22.155 "data_size": 63488 00:17:22.155 }, 00:17:22.155 { 00:17:22.155 "name": "BaseBdev3", 00:17:22.155 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:22.155 "is_configured": true, 00:17:22.155 "data_offset": 2048, 00:17:22.155 "data_size": 63488 00:17:22.155 } 00:17:22.155 ] 00:17:22.155 }' 00:17:22.155 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.155 23:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.721 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.721 23:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:22.979 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:22.979 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:23.237 [2024-07-13 23:03:12.411461] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.237 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.495 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.495 "name": "Existed_Raid", 00:17:23.495 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:23.495 "strip_size_kb": 64, 00:17:23.495 "state": "configuring", 00:17:23.495 "raid_level": "raid0", 00:17:23.495 "superblock": true, 00:17:23.495 "num_base_bdevs": 3, 00:17:23.495 "num_base_bdevs_discovered": 1, 00:17:23.495 "num_base_bdevs_operational": 3, 00:17:23.495 "base_bdevs_list": [ 00:17:23.495 { 00:17:23.496 "name": "BaseBdev1", 00:17:23.496 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:23.496 "is_configured": true, 00:17:23.496 "data_offset": 2048, 00:17:23.496 "data_size": 63488 00:17:23.496 }, 00:17:23.496 { 00:17:23.496 "name": null, 00:17:23.496 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:23.496 "is_configured": false, 00:17:23.496 "data_offset": 2048, 00:17:23.496 "data_size": 63488 00:17:23.496 }, 00:17:23.496 { 00:17:23.496 "name": null, 00:17:23.496 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:23.496 "is_configured": false, 00:17:23.496 "data_offset": 2048, 00:17:23.496 "data_size": 63488 00:17:23.496 } 00:17:23.496 ] 00:17:23.496 }' 00:17:23.496 23:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.496 23:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.061 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.061 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:24.318 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:24.318 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:24.575 [2024-07-13 23:03:13.815841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.575 23:03:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.575 23:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.833 23:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.833 "name": "Existed_Raid", 00:17:24.833 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:24.833 "strip_size_kb": 64, 00:17:24.833 "state": "configuring", 00:17:24.833 "raid_level": "raid0", 00:17:24.833 "superblock": true, 00:17:24.833 "num_base_bdevs": 3, 00:17:24.833 "num_base_bdevs_discovered": 2, 00:17:24.833 "num_base_bdevs_operational": 3, 00:17:24.833 "base_bdevs_list": [ 00:17:24.833 { 00:17:24.833 "name": "BaseBdev1", 00:17:24.833 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:24.833 "is_configured": true, 00:17:24.833 "data_offset": 2048, 00:17:24.833 "data_size": 63488 00:17:24.833 }, 00:17:24.833 { 00:17:24.833 "name": null, 00:17:24.833 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:24.833 "is_configured": false, 00:17:24.833 "data_offset": 2048, 00:17:24.833 "data_size": 63488 00:17:24.833 }, 00:17:24.833 { 00:17:24.833 "name": "BaseBdev3", 00:17:24.833 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:24.833 "is_configured": true, 00:17:24.833 "data_offset": 2048, 00:17:24.833 "data_size": 63488 00:17:24.833 } 00:17:24.833 ] 00:17:24.833 }' 00:17:24.833 23:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.833 23:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.397 23:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.397 23:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:25.654 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:25.654 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:25.912 [2024-07-13 23:03:15.287714] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.912 23:03:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.912 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.169 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.170 "name": "Existed_Raid", 00:17:26.170 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:26.170 "strip_size_kb": 64, 00:17:26.170 "state": "configuring", 00:17:26.170 "raid_level": "raid0", 00:17:26.170 "superblock": true, 00:17:26.170 "num_base_bdevs": 3, 00:17:26.170 "num_base_bdevs_discovered": 1, 00:17:26.170 "num_base_bdevs_operational": 3, 00:17:26.170 "base_bdevs_list": [ 00:17:26.170 { 00:17:26.170 "name": null, 00:17:26.170 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:26.170 "is_configured": false, 00:17:26.170 "data_offset": 2048, 00:17:26.170 "data_size": 63488 00:17:26.170 }, 00:17:26.170 { 00:17:26.170 "name": null, 00:17:26.170 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:26.170 "is_configured": false, 00:17:26.170 "data_offset": 2048, 00:17:26.170 "data_size": 63488 00:17:26.170 }, 00:17:26.170 { 00:17:26.170 "name": "BaseBdev3", 00:17:26.170 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:26.170 "is_configured": true, 00:17:26.170 "data_offset": 2048, 00:17:26.170 "data_size": 63488 00:17:26.170 } 00:17:26.170 ] 00:17:26.170 }' 00:17:26.170 23:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.170 23:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.104 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.104 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:27.104 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:27.104 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:27.362 [2024-07-13 23:03:16.682071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.362 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.630 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.630 "name": "Existed_Raid", 00:17:27.630 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:27.630 "strip_size_kb": 64, 00:17:27.630 "state": "configuring", 00:17:27.630 "raid_level": "raid0", 00:17:27.630 "superblock": true, 00:17:27.630 "num_base_bdevs": 3, 00:17:27.630 "num_base_bdevs_discovered": 2, 00:17:27.630 "num_base_bdevs_operational": 3, 00:17:27.630 "base_bdevs_list": [ 00:17:27.630 { 00:17:27.630 "name": null, 00:17:27.630 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:27.630 "is_configured": false, 00:17:27.630 "data_offset": 2048, 00:17:27.630 "data_size": 63488 00:17:27.630 }, 00:17:27.630 { 00:17:27.630 "name": "BaseBdev2", 00:17:27.630 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:27.630 "is_configured": true, 00:17:27.630 "data_offset": 2048, 00:17:27.630 "data_size": 63488 00:17:27.630 }, 00:17:27.630 { 00:17:27.630 "name": "BaseBdev3", 00:17:27.630 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:27.630 "is_configured": true, 00:17:27.630 "data_offset": 2048, 00:17:27.630 "data_size": 63488 00:17:27.630 } 00:17:27.630 ] 00:17:27.630 }' 00:17:27.630 23:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.630 23:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.258 23:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.258 23:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:28.517 23:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:28.517 23:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.517 23:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:28.775 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 98bb264b-1259-46fd-b8f3-8506f92e8027 00:17:29.034 [2024-07-13 23:03:18.342811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:29.034 [2024-07-13 23:03:18.343030] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:29.034 [2024-07-13 23:03:18.343045] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:17:29.034 [2024-07-13 23:03:18.343122] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:29.034 [2024-07-13 23:03:18.343507] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:29.034 [2024-07-13 23:03:18.343533] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:29.034 NewBaseBdev 00:17:29.034 [2024-07-13 23:03:18.343642] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:29.034 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.292 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:29.550 [ 00:17:29.550 { 00:17:29.550 "name": "NewBaseBdev", 00:17:29.550 "aliases": [ 00:17:29.550 "98bb264b-1259-46fd-b8f3-8506f92e8027" 00:17:29.550 ], 00:17:29.550 "product_name": "Malloc disk", 00:17:29.550 "block_size": 512, 00:17:29.550 "num_blocks": 65536, 00:17:29.550 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:29.550 "assigned_rate_limits": { 00:17:29.550 "rw_ios_per_sec": 0, 00:17:29.550 "rw_mbytes_per_sec": 0, 00:17:29.550 "r_mbytes_per_sec": 0, 00:17:29.550 "w_mbytes_per_sec": 0 00:17:29.550 }, 00:17:29.550 "claimed": true, 00:17:29.550 "claim_type": "exclusive_write", 00:17:29.550 "zoned": false, 00:17:29.550 "supported_io_types": { 00:17:29.550 "read": true, 00:17:29.550 "write": true, 00:17:29.550 "unmap": true, 00:17:29.550 "flush": true, 00:17:29.550 "reset": true, 00:17:29.550 "nvme_admin": false, 00:17:29.550 "nvme_io": false, 00:17:29.550 "nvme_io_md": false, 00:17:29.550 "write_zeroes": true, 00:17:29.550 "zcopy": true, 00:17:29.550 "get_zone_info": false, 00:17:29.550 "zone_management": false, 00:17:29.550 "zone_append": false, 00:17:29.550 "compare": false, 00:17:29.550 "compare_and_write": false, 00:17:29.550 "abort": true, 00:17:29.550 "seek_hole": false, 00:17:29.550 "seek_data": false, 00:17:29.550 "copy": true, 00:17:29.550 "nvme_iov_md": false 00:17:29.550 }, 00:17:29.550 "memory_domains": [ 00:17:29.550 { 00:17:29.550 "dma_device_id": "system", 00:17:29.550 "dma_device_type": 1 00:17:29.550 }, 00:17:29.550 { 00:17:29.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.550 "dma_device_type": 2 00:17:29.550 } 00:17:29.550 ], 00:17:29.550 "driver_specific": {} 00:17:29.550 } 00:17:29.550 ] 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.550 23:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.809 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.809 "name": "Existed_Raid", 00:17:29.809 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:29.809 "strip_size_kb": 64, 00:17:29.809 "state": "online", 00:17:29.809 "raid_level": "raid0", 00:17:29.809 "superblock": true, 00:17:29.809 "num_base_bdevs": 3, 00:17:29.809 "num_base_bdevs_discovered": 3, 00:17:29.809 "num_base_bdevs_operational": 3, 00:17:29.809 "base_bdevs_list": [ 00:17:29.809 { 00:17:29.809 "name": "NewBaseBdev", 00:17:29.809 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:29.809 "is_configured": true, 00:17:29.809 "data_offset": 2048, 00:17:29.809 "data_size": 63488 00:17:29.809 }, 00:17:29.809 { 00:17:29.809 "name": "BaseBdev2", 00:17:29.809 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:29.809 "is_configured": true, 00:17:29.809 "data_offset": 2048, 00:17:29.809 "data_size": 63488 00:17:29.809 }, 00:17:29.809 { 00:17:29.809 "name": "BaseBdev3", 00:17:29.809 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:29.809 "is_configured": true, 00:17:29.809 "data_offset": 2048, 00:17:29.809 "data_size": 63488 00:17:29.809 } 00:17:29.809 ] 00:17:29.809 }' 00:17:29.809 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.809 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:30.376 23:03:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:30.376 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:30.634 [2024-07-13 23:03:19.919502] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.634 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:30.634 "name": "Existed_Raid", 00:17:30.634 "aliases": [ 00:17:30.634 "e6ca7c41-30f3-4cae-b438-b99f4bd38528" 00:17:30.634 ], 00:17:30.634 "product_name": "Raid Volume", 00:17:30.634 "block_size": 512, 00:17:30.634 "num_blocks": 190464, 00:17:30.634 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:30.634 "assigned_rate_limits": { 00:17:30.634 "rw_ios_per_sec": 0, 00:17:30.634 "rw_mbytes_per_sec": 0, 00:17:30.634 "r_mbytes_per_sec": 0, 00:17:30.634 "w_mbytes_per_sec": 0 00:17:30.634 }, 00:17:30.634 "claimed": false, 00:17:30.634 "zoned": false, 00:17:30.634 "supported_io_types": { 00:17:30.634 "read": true, 00:17:30.634 "write": true, 00:17:30.634 "unmap": true, 00:17:30.634 "flush": true, 00:17:30.634 "reset": true, 00:17:30.634 "nvme_admin": false, 00:17:30.634 "nvme_io": false, 00:17:30.634 "nvme_io_md": false, 00:17:30.634 "write_zeroes": true, 00:17:30.634 "zcopy": false, 00:17:30.634 "get_zone_info": false, 00:17:30.634 "zone_management": false, 00:17:30.634 "zone_append": false, 00:17:30.634 "compare": false, 00:17:30.634 "compare_and_write": false, 00:17:30.634 "abort": false, 00:17:30.634 "seek_hole": false, 00:17:30.634 "seek_data": false, 00:17:30.634 "copy": false, 00:17:30.634 "nvme_iov_md": false 00:17:30.634 }, 00:17:30.634 "memory_domains": [ 00:17:30.634 { 00:17:30.634 "dma_device_id": "system", 00:17:30.634 "dma_device_type": 1 00:17:30.634 }, 00:17:30.634 { 00:17:30.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.634 "dma_device_type": 2 00:17:30.634 }, 00:17:30.634 { 00:17:30.634 "dma_device_id": "system", 00:17:30.634 "dma_device_type": 1 00:17:30.634 }, 00:17:30.634 { 00:17:30.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.634 "dma_device_type": 2 00:17:30.634 }, 00:17:30.634 { 00:17:30.634 "dma_device_id": "system", 00:17:30.634 "dma_device_type": 1 00:17:30.634 }, 00:17:30.634 { 00:17:30.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.634 "dma_device_type": 2 00:17:30.634 } 00:17:30.634 ], 00:17:30.634 "driver_specific": { 00:17:30.634 "raid": { 00:17:30.634 "uuid": "e6ca7c41-30f3-4cae-b438-b99f4bd38528", 00:17:30.634 "strip_size_kb": 64, 00:17:30.634 "state": "online", 00:17:30.634 "raid_level": "raid0", 00:17:30.634 "superblock": true, 00:17:30.634 "num_base_bdevs": 3, 00:17:30.634 "num_base_bdevs_discovered": 3, 00:17:30.634 "num_base_bdevs_operational": 3, 00:17:30.634 "base_bdevs_list": [ 00:17:30.634 { 00:17:30.635 "name": "NewBaseBdev", 00:17:30.635 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:30.635 "is_configured": true, 00:17:30.635 "data_offset": 2048, 00:17:30.635 "data_size": 63488 00:17:30.635 }, 00:17:30.635 { 00:17:30.635 "name": "BaseBdev2", 00:17:30.635 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:30.635 "is_configured": true, 00:17:30.635 "data_offset": 2048, 00:17:30.635 "data_size": 63488 00:17:30.635 }, 00:17:30.635 { 00:17:30.635 "name": "BaseBdev3", 00:17:30.635 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:30.635 "is_configured": true, 00:17:30.635 "data_offset": 2048, 00:17:30.635 "data_size": 
63488 00:17:30.635 } 00:17:30.635 ] 00:17:30.635 } 00:17:30.635 } 00:17:30.635 }' 00:17:30.635 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.635 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:30.635 BaseBdev2 00:17:30.635 BaseBdev3' 00:17:30.635 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.635 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:30.635 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:30.892 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:30.892 "name": "NewBaseBdev", 00:17:30.892 "aliases": [ 00:17:30.892 "98bb264b-1259-46fd-b8f3-8506f92e8027" 00:17:30.892 ], 00:17:30.892 "product_name": "Malloc disk", 00:17:30.892 "block_size": 512, 00:17:30.892 "num_blocks": 65536, 00:17:30.892 "uuid": "98bb264b-1259-46fd-b8f3-8506f92e8027", 00:17:30.892 "assigned_rate_limits": { 00:17:30.892 "rw_ios_per_sec": 0, 00:17:30.892 "rw_mbytes_per_sec": 0, 00:17:30.892 "r_mbytes_per_sec": 0, 00:17:30.892 "w_mbytes_per_sec": 0 00:17:30.892 }, 00:17:30.892 "claimed": true, 00:17:30.892 "claim_type": "exclusive_write", 00:17:30.892 "zoned": false, 00:17:30.892 "supported_io_types": { 00:17:30.892 "read": true, 00:17:30.892 "write": true, 00:17:30.892 "unmap": true, 00:17:30.892 "flush": true, 00:17:30.892 "reset": true, 00:17:30.892 "nvme_admin": false, 00:17:30.892 "nvme_io": false, 00:17:30.892 "nvme_io_md": false, 00:17:30.892 "write_zeroes": true, 00:17:30.892 "zcopy": true, 00:17:30.892 "get_zone_info": false, 00:17:30.892 "zone_management": false, 00:17:30.892 "zone_append": false, 00:17:30.892 "compare": false, 00:17:30.892 "compare_and_write": false, 00:17:30.892 "abort": true, 00:17:30.892 "seek_hole": false, 00:17:30.892 "seek_data": false, 00:17:30.892 "copy": true, 00:17:30.892 "nvme_iov_md": false 00:17:30.892 }, 00:17:30.892 "memory_domains": [ 00:17:30.892 { 00:17:30.892 "dma_device_id": "system", 00:17:30.892 "dma_device_type": 1 00:17:30.892 }, 00:17:30.892 { 00:17:30.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.892 "dma_device_type": 2 00:17:30.892 } 00:17:30.892 ], 00:17:30.892 "driver_specific": {} 00:17:30.892 }' 00:17:30.892 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.150 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.150 23:03:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.409 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.409 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.409 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.409 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:31.409 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.667 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.667 "name": "BaseBdev2", 00:17:31.667 "aliases": [ 00:17:31.667 "96529b47-66c3-4bdb-a09a-7454cfd4a620" 00:17:31.667 ], 00:17:31.667 "product_name": "Malloc disk", 00:17:31.667 "block_size": 512, 00:17:31.667 "num_blocks": 65536, 00:17:31.667 "uuid": "96529b47-66c3-4bdb-a09a-7454cfd4a620", 00:17:31.667 "assigned_rate_limits": { 00:17:31.667 "rw_ios_per_sec": 0, 00:17:31.667 "rw_mbytes_per_sec": 0, 00:17:31.667 "r_mbytes_per_sec": 0, 00:17:31.667 "w_mbytes_per_sec": 0 00:17:31.667 }, 00:17:31.667 "claimed": true, 00:17:31.667 "claim_type": "exclusive_write", 00:17:31.667 "zoned": false, 00:17:31.667 "supported_io_types": { 00:17:31.667 "read": true, 00:17:31.667 "write": true, 00:17:31.667 "unmap": true, 00:17:31.667 "flush": true, 00:17:31.667 "reset": true, 00:17:31.667 "nvme_admin": false, 00:17:31.667 "nvme_io": false, 00:17:31.667 "nvme_io_md": false, 00:17:31.667 "write_zeroes": true, 00:17:31.667 "zcopy": true, 00:17:31.667 "get_zone_info": false, 00:17:31.667 "zone_management": false, 00:17:31.667 "zone_append": false, 00:17:31.667 "compare": false, 00:17:31.667 "compare_and_write": false, 00:17:31.667 "abort": true, 00:17:31.667 "seek_hole": false, 00:17:31.667 "seek_data": false, 00:17:31.667 "copy": true, 00:17:31.667 "nvme_iov_md": false 00:17:31.667 }, 00:17:31.667 "memory_domains": [ 00:17:31.667 { 00:17:31.667 "dma_device_id": "system", 00:17:31.667 "dma_device_type": 1 00:17:31.667 }, 00:17:31.667 { 00:17:31.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.667 "dma_device_type": 2 00:17:31.667 } 00:17:31.667 ], 00:17:31.667 "driver_specific": {} 00:17:31.667 }' 00:17:31.667 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.667 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.667 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.667 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.667 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.667 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.667 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:31.925 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.184 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.184 "name": "BaseBdev3", 00:17:32.184 "aliases": [ 00:17:32.184 "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e" 00:17:32.184 ], 00:17:32.184 "product_name": "Malloc disk", 00:17:32.184 "block_size": 512, 00:17:32.184 "num_blocks": 65536, 00:17:32.184 "uuid": "c5dd2077-9507-4af3-8cf4-2a2c2e765d5e", 00:17:32.184 "assigned_rate_limits": { 00:17:32.184 "rw_ios_per_sec": 0, 00:17:32.184 "rw_mbytes_per_sec": 0, 00:17:32.184 "r_mbytes_per_sec": 0, 00:17:32.184 "w_mbytes_per_sec": 0 00:17:32.184 }, 00:17:32.184 "claimed": true, 00:17:32.184 "claim_type": "exclusive_write", 00:17:32.184 "zoned": false, 00:17:32.184 "supported_io_types": { 00:17:32.184 "read": true, 00:17:32.184 "write": true, 00:17:32.184 "unmap": true, 00:17:32.184 "flush": true, 00:17:32.184 "reset": true, 00:17:32.184 "nvme_admin": false, 00:17:32.184 "nvme_io": false, 00:17:32.184 "nvme_io_md": false, 00:17:32.184 "write_zeroes": true, 00:17:32.184 "zcopy": true, 00:17:32.184 "get_zone_info": false, 00:17:32.184 "zone_management": false, 00:17:32.184 "zone_append": false, 00:17:32.184 "compare": false, 00:17:32.184 "compare_and_write": false, 00:17:32.184 "abort": true, 00:17:32.184 "seek_hole": false, 00:17:32.184 "seek_data": false, 00:17:32.184 "copy": true, 00:17:32.184 "nvme_iov_md": false 00:17:32.184 }, 00:17:32.184 "memory_domains": [ 00:17:32.184 { 00:17:32.184 "dma_device_id": "system", 00:17:32.184 "dma_device_type": 1 00:17:32.184 }, 00:17:32.184 { 00:17:32.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.184 "dma_device_type": 2 00:17:32.184 } 00:17:32.184 ], 00:17:32.184 "driver_specific": {} 00:17:32.184 }' 00:17:32.184 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.184 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.442 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.701 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.701 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
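The trace above is the verify_raid_bdev_properties loop (bdev/bdev_raid.sh@203-208): for each base bdev reported as configured, it re-queries bdev_get_bdevs and asserts the geometry and metadata fields. Condensed into a rough sketch, using only the rpc.py path, socket, jq filters, and expected values (512/null, matching the Malloc base bdevs in this run) that appear in the xtrace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in $base_bdev_names; do
      # one JSON object per base bdev, as dumped above
      base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]        # sh@205
      [[ $(jq .md_size <<< "$base_bdev_info") == null ]]          # sh@206
      [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]    # sh@207
      [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]         # sh@208
  done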
00:17:32.701 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:32.959 [2024-07-13 23:03:22.175758] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:32.959 [2024-07-13 23:03:22.175794] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:32.959 [2024-07-13 23:03:22.175942] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:32.959 [2024-07-13 23:03:22.176009] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:32.959 [2024-07-13 23:03:22.176022] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136303
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 136303 ']'
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 136303
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136303
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136303'
killing process with pid 136303
00:17:32.959 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 136303
00:17:32.960 [2024-07-13 23:03:22.215576] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:32.960 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 136303
00:17:32.960 [2024-07-13 23:03:22.241976] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:33.218 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0
00:17:33.218
00:17:33.218 real 0m29.372s
00:17:33.218 user 0m55.989s
00:17:33.218 sys 0m3.380s
00:17:33.218 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:33.218 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:33.218 ************************************
00:17:33.218 END TEST raid_state_function_test_sb
00:17:33.218 ************************************
00:17:33.218 23:03:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:17:33.218 23:03:22 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:17:33.218 23:03:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:17:33.218 23:03:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:33.218 23:03:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:33.218 ************************************
00:17:33.218 START TEST raid_superblock_test
************************************
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=()
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=()
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=()
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']'
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64'
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137273
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137273 /var/tmp/spdk-raid.sock
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 137273 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:33.218 23:03:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:33.477 [2024-07-13 23:03:22.582231] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
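raid_superblock_test begins by launching a bare bdev_svc application on a private RPC socket with bdev_raid debug logging enabled, then blocking until the socket accepts RPCs. A minimal sketch of that bring-up, assembled from the paths and flags printed above (waitforlisten is the common/autotest_common.sh helper shown in the trace):

  # start the stub bdev app with raid debug logs on a dedicated socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # poll until the UNIX-domain RPC socket is listening (up to max_retries)
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

Every subsequent rpc.py call in this test targets that socket via -s /var/tmp/spdk-raid.sock.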
00:17:33.218 [2024-07-13 23:03:22.582521] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137273 ] 00:17:33.477 [2024-07-13 23:03:22.731432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.477 [2024-07-13 23:03:22.813529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.477 [2024-07-13 23:03:22.873327] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:34.408 malloc1 00:17:34.408 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.665 [2024-07-13 23:03:23.952282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.665 [2024-07-13 23:03:23.952397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.665 [2024-07-13 23:03:23.952436] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:34.665 [2024-07-13 23:03:23.952484] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.665 [2024-07-13 23:03:23.954985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.665 [2024-07-13 23:03:23.955040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.666 pt1 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:34.666 23:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:34.922 malloc2 00:17:34.923 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.179 [2024-07-13 23:03:24.406436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.179 [2024-07-13 23:03:24.406538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.179 [2024-07-13 23:03:24.406578] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:35.179 [2024-07-13 23:03:24.406628] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.179 [2024-07-13 23:03:24.409014] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.179 [2024-07-13 23:03:24.409085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.179 pt2 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.179 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.180 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:35.436 malloc3 00:17:35.436 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:35.693 [2024-07-13 23:03:24.859667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:35.693 [2024-07-13 23:03:24.859773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.693 [2024-07-13 23:03:24.859816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:35.693 [2024-07-13 23:03:24.859892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.693 [2024-07-13 23:03:24.862318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.693 [2024-07-13 23:03:24.862391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:35.693 pt3 00:17:35.693 
23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:35.693 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:35.694 23:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:35.694 [2024-07-13 23:03:25.067747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.694 [2024-07-13 23:03:25.069747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.694 [2024-07-13 23:03:25.069829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:35.694 [2024-07-13 23:03:25.070063] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:35.694 [2024-07-13 23:03:25.070079] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:35.694 [2024-07-13 23:03:25.070205] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:35.694 [2024-07-13 23:03:25.070635] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:35.694 [2024-07-13 23:03:25.070660] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:17:35.694 [2024-07-13 23:03:25.070810] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.694 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.951 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.951 "name": "raid_bdev1", 00:17:35.951 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:35.951 "strip_size_kb": 64, 00:17:35.951 "state": "online", 00:17:35.951 "raid_level": "raid0", 00:17:35.951 "superblock": true, 00:17:35.951 "num_base_bdevs": 3, 00:17:35.951 "num_base_bdevs_discovered": 3, 00:17:35.951 "num_base_bdevs_operational": 3, 00:17:35.951 "base_bdevs_list": [ 00:17:35.951 { 00:17:35.951 "name": "pt1", 00:17:35.951 "uuid": "00000000-0000-0000-0000-000000000001", 
00:17:35.951 "is_configured": true, 00:17:35.951 "data_offset": 2048, 00:17:35.951 "data_size": 63488 00:17:35.951 }, 00:17:35.951 { 00:17:35.951 "name": "pt2", 00:17:35.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.951 "is_configured": true, 00:17:35.951 "data_offset": 2048, 00:17:35.951 "data_size": 63488 00:17:35.951 }, 00:17:35.951 { 00:17:35.951 "name": "pt3", 00:17:35.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.951 "is_configured": true, 00:17:35.951 "data_offset": 2048, 00:17:35.951 "data_size": 63488 00:17:35.951 } 00:17:35.951 ] 00:17:35.951 }' 00:17:35.951 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.951 23:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:36.516 23:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:36.773 [2024-07-13 23:03:26.040234] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.773 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:36.773 "name": "raid_bdev1", 00:17:36.773 "aliases": [ 00:17:36.773 "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24" 00:17:36.773 ], 00:17:36.773 "product_name": "Raid Volume", 00:17:36.773 "block_size": 512, 00:17:36.773 "num_blocks": 190464, 00:17:36.773 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:36.773 "assigned_rate_limits": { 00:17:36.773 "rw_ios_per_sec": 0, 00:17:36.773 "rw_mbytes_per_sec": 0, 00:17:36.773 "r_mbytes_per_sec": 0, 00:17:36.773 "w_mbytes_per_sec": 0 00:17:36.773 }, 00:17:36.773 "claimed": false, 00:17:36.773 "zoned": false, 00:17:36.773 "supported_io_types": { 00:17:36.773 "read": true, 00:17:36.773 "write": true, 00:17:36.773 "unmap": true, 00:17:36.773 "flush": true, 00:17:36.773 "reset": true, 00:17:36.773 "nvme_admin": false, 00:17:36.773 "nvme_io": false, 00:17:36.773 "nvme_io_md": false, 00:17:36.773 "write_zeroes": true, 00:17:36.773 "zcopy": false, 00:17:36.773 "get_zone_info": false, 00:17:36.773 "zone_management": false, 00:17:36.773 "zone_append": false, 00:17:36.773 "compare": false, 00:17:36.773 "compare_and_write": false, 00:17:36.773 "abort": false, 00:17:36.773 "seek_hole": false, 00:17:36.773 "seek_data": false, 00:17:36.773 "copy": false, 00:17:36.773 "nvme_iov_md": false 00:17:36.773 }, 00:17:36.773 "memory_domains": [ 00:17:36.773 { 00:17:36.773 "dma_device_id": "system", 00:17:36.773 "dma_device_type": 1 00:17:36.773 }, 00:17:36.773 { 00:17:36.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.773 "dma_device_type": 2 00:17:36.773 }, 00:17:36.773 { 00:17:36.773 "dma_device_id": "system", 00:17:36.773 "dma_device_type": 1 00:17:36.773 }, 
00:17:36.773 { 00:17:36.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.773 "dma_device_type": 2 00:17:36.773 }, 00:17:36.773 { 00:17:36.773 "dma_device_id": "system", 00:17:36.773 "dma_device_type": 1 00:17:36.773 }, 00:17:36.773 { 00:17:36.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.773 "dma_device_type": 2 00:17:36.773 } 00:17:36.773 ], 00:17:36.773 "driver_specific": { 00:17:36.773 "raid": { 00:17:36.773 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:36.773 "strip_size_kb": 64, 00:17:36.773 "state": "online", 00:17:36.773 "raid_level": "raid0", 00:17:36.773 "superblock": true, 00:17:36.773 "num_base_bdevs": 3, 00:17:36.773 "num_base_bdevs_discovered": 3, 00:17:36.773 "num_base_bdevs_operational": 3, 00:17:36.773 "base_bdevs_list": [ 00:17:36.773 { 00:17:36.773 "name": "pt1", 00:17:36.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.773 "is_configured": true, 00:17:36.773 "data_offset": 2048, 00:17:36.773 "data_size": 63488 00:17:36.773 }, 00:17:36.773 { 00:17:36.773 "name": "pt2", 00:17:36.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.773 "is_configured": true, 00:17:36.773 "data_offset": 2048, 00:17:36.773 "data_size": 63488 00:17:36.773 }, 00:17:36.773 { 00:17:36.773 "name": "pt3", 00:17:36.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.773 "is_configured": true, 00:17:36.773 "data_offset": 2048, 00:17:36.773 "data_size": 63488 00:17:36.773 } 00:17:36.773 ] 00:17:36.773 } 00:17:36.773 } 00:17:36.773 }' 00:17:36.773 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.773 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:36.773 pt2 00:17:36.773 pt3' 00:17:36.773 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.773 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:36.773 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.030 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.030 "name": "pt1", 00:17:37.030 "aliases": [ 00:17:37.031 "00000000-0000-0000-0000-000000000001" 00:17:37.031 ], 00:17:37.031 "product_name": "passthru", 00:17:37.031 "block_size": 512, 00:17:37.031 "num_blocks": 65536, 00:17:37.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.031 "assigned_rate_limits": { 00:17:37.031 "rw_ios_per_sec": 0, 00:17:37.031 "rw_mbytes_per_sec": 0, 00:17:37.031 "r_mbytes_per_sec": 0, 00:17:37.031 "w_mbytes_per_sec": 0 00:17:37.031 }, 00:17:37.031 "claimed": true, 00:17:37.031 "claim_type": "exclusive_write", 00:17:37.031 "zoned": false, 00:17:37.031 "supported_io_types": { 00:17:37.031 "read": true, 00:17:37.031 "write": true, 00:17:37.031 "unmap": true, 00:17:37.031 "flush": true, 00:17:37.031 "reset": true, 00:17:37.031 "nvme_admin": false, 00:17:37.031 "nvme_io": false, 00:17:37.031 "nvme_io_md": false, 00:17:37.031 "write_zeroes": true, 00:17:37.031 "zcopy": true, 00:17:37.031 "get_zone_info": false, 00:17:37.031 "zone_management": false, 00:17:37.031 "zone_append": false, 00:17:37.031 "compare": false, 00:17:37.031 "compare_and_write": false, 00:17:37.031 "abort": true, 00:17:37.031 "seek_hole": false, 00:17:37.031 "seek_data": false, 00:17:37.031 "copy": true, 00:17:37.031 "nvme_iov_md": false 
00:17:37.031 }, 00:17:37.031 "memory_domains": [ 00:17:37.031 { 00:17:37.031 "dma_device_id": "system", 00:17:37.031 "dma_device_type": 1 00:17:37.031 }, 00:17:37.031 { 00:17:37.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.031 "dma_device_type": 2 00:17:37.031 } 00:17:37.031 ], 00:17:37.031 "driver_specific": { 00:17:37.031 "passthru": { 00:17:37.031 "name": "pt1", 00:17:37.031 "base_bdev_name": "malloc1" 00:17:37.031 } 00:17:37.031 } 00:17:37.031 }' 00:17:37.031 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.031 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.031 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.288 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:37.544 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.544 "name": "pt2", 00:17:37.544 "aliases": [ 00:17:37.544 "00000000-0000-0000-0000-000000000002" 00:17:37.544 ], 00:17:37.544 "product_name": "passthru", 00:17:37.544 "block_size": 512, 00:17:37.544 "num_blocks": 65536, 00:17:37.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.545 "assigned_rate_limits": { 00:17:37.545 "rw_ios_per_sec": 0, 00:17:37.545 "rw_mbytes_per_sec": 0, 00:17:37.545 "r_mbytes_per_sec": 0, 00:17:37.545 "w_mbytes_per_sec": 0 00:17:37.545 }, 00:17:37.545 "claimed": true, 00:17:37.545 "claim_type": "exclusive_write", 00:17:37.545 "zoned": false, 00:17:37.545 "supported_io_types": { 00:17:37.545 "read": true, 00:17:37.545 "write": true, 00:17:37.545 "unmap": true, 00:17:37.545 "flush": true, 00:17:37.545 "reset": true, 00:17:37.545 "nvme_admin": false, 00:17:37.545 "nvme_io": false, 00:17:37.545 "nvme_io_md": false, 00:17:37.545 "write_zeroes": true, 00:17:37.545 "zcopy": true, 00:17:37.545 "get_zone_info": false, 00:17:37.545 "zone_management": false, 00:17:37.545 "zone_append": false, 00:17:37.545 "compare": false, 00:17:37.545 "compare_and_write": false, 00:17:37.545 "abort": true, 00:17:37.545 "seek_hole": false, 00:17:37.545 "seek_data": false, 00:17:37.545 "copy": true, 00:17:37.545 "nvme_iov_md": false 00:17:37.545 }, 00:17:37.545 "memory_domains": [ 00:17:37.545 { 00:17:37.545 "dma_device_id": "system", 00:17:37.545 "dma_device_type": 1 00:17:37.545 }, 
00:17:37.545 { 00:17:37.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.545 "dma_device_type": 2 00:17:37.545 } 00:17:37.545 ], 00:17:37.545 "driver_specific": { 00:17:37.545 "passthru": { 00:17:37.545 "name": "pt2", 00:17:37.545 "base_bdev_name": "malloc2" 00:17:37.545 } 00:17:37.545 } 00:17:37.545 }' 00:17:37.545 23:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.800 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.800 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:37.800 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.800 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.801 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.801 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.801 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:38.057 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.314 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.314 "name": "pt3", 00:17:38.314 "aliases": [ 00:17:38.314 "00000000-0000-0000-0000-000000000003" 00:17:38.314 ], 00:17:38.314 "product_name": "passthru", 00:17:38.314 "block_size": 512, 00:17:38.314 "num_blocks": 65536, 00:17:38.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.314 "assigned_rate_limits": { 00:17:38.314 "rw_ios_per_sec": 0, 00:17:38.314 "rw_mbytes_per_sec": 0, 00:17:38.314 "r_mbytes_per_sec": 0, 00:17:38.314 "w_mbytes_per_sec": 0 00:17:38.314 }, 00:17:38.314 "claimed": true, 00:17:38.314 "claim_type": "exclusive_write", 00:17:38.314 "zoned": false, 00:17:38.314 "supported_io_types": { 00:17:38.314 "read": true, 00:17:38.314 "write": true, 00:17:38.314 "unmap": true, 00:17:38.314 "flush": true, 00:17:38.314 "reset": true, 00:17:38.314 "nvme_admin": false, 00:17:38.314 "nvme_io": false, 00:17:38.314 "nvme_io_md": false, 00:17:38.314 "write_zeroes": true, 00:17:38.314 "zcopy": true, 00:17:38.314 "get_zone_info": false, 00:17:38.314 "zone_management": false, 00:17:38.314 "zone_append": false, 00:17:38.314 "compare": false, 00:17:38.314 "compare_and_write": false, 00:17:38.314 "abort": true, 00:17:38.314 "seek_hole": false, 00:17:38.314 "seek_data": false, 00:17:38.314 "copy": true, 00:17:38.314 "nvme_iov_md": false 00:17:38.314 }, 00:17:38.314 "memory_domains": [ 00:17:38.314 { 00:17:38.314 "dma_device_id": "system", 00:17:38.314 "dma_device_type": 1 00:17:38.314 }, 00:17:38.314 { 00:17:38.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.314 "dma_device_type": 2 00:17:38.314 } 00:17:38.314 ], 00:17:38.314 
"driver_specific": { 00:17:38.314 "passthru": { 00:17:38.314 "name": "pt3", 00:17:38.314 "base_bdev_name": "malloc3" 00:17:38.314 } 00:17:38.314 } 00:17:38.314 }' 00:17:38.314 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.314 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.314 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:38.314 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.572 23:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:38.830 [2024-07-13 23:03:28.192802] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.830 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24 00:17:38.830 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24 ']' 00:17:38.830 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.088 [2024-07-13 23:03:28.448585] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.088 [2024-07-13 23:03:28.448612] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.088 [2024-07-13 23:03:28.448714] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.088 [2024-07-13 23:03:28.448787] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.088 [2024-07-13 23:03:28.448801] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:17:39.088 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.088 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:39.347 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:39.347 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:39.347 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.347 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:39.605 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.605 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:39.862 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.862 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:40.120 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:40.120 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:40.378 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:40.635 [2024-07-13 23:03:29.812836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.635 [2024-07-13 23:03:29.814911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.635 [2024-07-13 23:03:29.815002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:40.635 [2024-07-13 23:03:29.815058] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:40.635 [2024-07-13 
23:03:29.815159] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:40.635 [2024-07-13 23:03:29.815230] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:40.636 [2024-07-13 23:03:29.815300] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.636 [2024-07-13 23:03:29.815314] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:17:40.636 request: 00:17:40.636 { 00:17:40.636 "name": "raid_bdev1", 00:17:40.636 "raid_level": "raid0", 00:17:40.636 "base_bdevs": [ 00:17:40.636 "malloc1", 00:17:40.636 "malloc2", 00:17:40.636 "malloc3" 00:17:40.636 ], 00:17:40.636 "strip_size_kb": 64, 00:17:40.636 "superblock": false, 00:17:40.636 "method": "bdev_raid_create", 00:17:40.636 "req_id": 1 00:17:40.636 } 00:17:40.636 Got JSON-RPC error response 00:17:40.636 response: 00:17:40.636 { 00:17:40.636 "code": -17, 00:17:40.636 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.636 } 00:17:40.636 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:40.636 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.636 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.636 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.636 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:40.636 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.893 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:40.893 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:40.893 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.893 [2024-07-13 23:03:30.284852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.893 [2024-07-13 23:03:30.284985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.893 [2024-07-13 23:03:30.285033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:40.893 [2024-07-13 23:03:30.285056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.893 [2024-07-13 23:03:30.287450] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.893 [2024-07-13 23:03:30.287516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.893 [2024-07-13 23:03:30.287640] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:40.893 [2024-07-13 23:03:30.287709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.893 pt1 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:41.151 23:03:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.151 "name": "raid_bdev1", 00:17:41.151 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:41.151 "strip_size_kb": 64, 00:17:41.151 "state": "configuring", 00:17:41.151 "raid_level": "raid0", 00:17:41.151 "superblock": true, 00:17:41.151 "num_base_bdevs": 3, 00:17:41.151 "num_base_bdevs_discovered": 1, 00:17:41.151 "num_base_bdevs_operational": 3, 00:17:41.151 "base_bdevs_list": [ 00:17:41.151 { 00:17:41.151 "name": "pt1", 00:17:41.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.151 "is_configured": true, 00:17:41.151 "data_offset": 2048, 00:17:41.151 "data_size": 63488 00:17:41.151 }, 00:17:41.151 { 00:17:41.151 "name": null, 00:17:41.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.151 "is_configured": false, 00:17:41.151 "data_offset": 2048, 00:17:41.151 "data_size": 63488 00:17:41.151 }, 00:17:41.151 { 00:17:41.151 "name": null, 00:17:41.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:41.151 "is_configured": false, 00:17:41.151 "data_offset": 2048, 00:17:41.151 "data_size": 63488 00:17:41.151 } 00:17:41.151 ] 00:17:41.151 }' 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.151 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.091 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:42.091 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.091 [2024-07-13 23:03:31.405262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.091 [2024-07-13 23:03:31.405402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.091 [2024-07-13 23:03:31.405448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:42.091 [2024-07-13 23:03:31.405472] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.091 [2024-07-13 23:03:31.406002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.091 [2024-07-13 23:03:31.406051] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:17:42.091 [2024-07-13 23:03:31.406174] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:42.091 [2024-07-13 23:03:31.406214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.091 pt2 00:17:42.091 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:42.356 [2024-07-13 23:03:31.673349] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.356 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.614 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.614 "name": "raid_bdev1", 00:17:42.614 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:42.614 "strip_size_kb": 64, 00:17:42.614 "state": "configuring", 00:17:42.614 "raid_level": "raid0", 00:17:42.614 "superblock": true, 00:17:42.614 "num_base_bdevs": 3, 00:17:42.614 "num_base_bdevs_discovered": 1, 00:17:42.614 "num_base_bdevs_operational": 3, 00:17:42.614 "base_bdevs_list": [ 00:17:42.614 { 00:17:42.614 "name": "pt1", 00:17:42.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.614 "is_configured": true, 00:17:42.614 "data_offset": 2048, 00:17:42.614 "data_size": 63488 00:17:42.614 }, 00:17:42.614 { 00:17:42.614 "name": null, 00:17:42.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.614 "is_configured": false, 00:17:42.614 "data_offset": 2048, 00:17:42.614 "data_size": 63488 00:17:42.614 }, 00:17:42.614 { 00:17:42.614 "name": null, 00:17:42.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:42.614 "is_configured": false, 00:17:42.614 "data_offset": 2048, 00:17:42.614 "data_size": 63488 00:17:42.614 } 00:17:42.614 ] 00:17:42.614 }' 00:17:42.614 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.614 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.180 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:43.180 23:03:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:43.180 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.438 [2024-07-13 23:03:32.797657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.438 [2024-07-13 23:03:32.797788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.438 [2024-07-13 23:03:32.797827] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:43.438 [2024-07-13 23:03:32.797861] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.438 [2024-07-13 23:03:32.798362] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.438 [2024-07-13 23:03:32.798413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.438 [2024-07-13 23:03:32.798550] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:43.438 [2024-07-13 23:03:32.798579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.438 pt2 00:17:43.438 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:43.438 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:43.438 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:43.696 [2024-07-13 23:03:33.085684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:43.696 [2024-07-13 23:03:33.085797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.696 [2024-07-13 23:03:33.085851] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:43.696 [2024-07-13 23:03:33.085883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.696 [2024-07-13 23:03:33.086415] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.696 [2024-07-13 23:03:33.086472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:43.696 [2024-07-13 23:03:33.086607] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:43.696 [2024-07-13 23:03:33.086636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:43.696 [2024-07-13 23:03:33.086771] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:43.696 [2024-07-13 23:03:33.086787] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:43.696 [2024-07-13 23:03:33.086870] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:43.696 [2024-07-13 23:03:33.087216] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:43.696 [2024-07-13 23:03:33.087241] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:43.696 [2024-07-13 23:03:33.087355] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.696 pt3 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.696 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:43.956 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.956 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.956 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.956 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.956 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.956 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.217 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.217 "name": "raid_bdev1", 00:17:44.217 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:44.217 "strip_size_kb": 64, 00:17:44.217 "state": "online", 00:17:44.217 "raid_level": "raid0", 00:17:44.217 "superblock": true, 00:17:44.217 "num_base_bdevs": 3, 00:17:44.217 "num_base_bdevs_discovered": 3, 00:17:44.217 "num_base_bdevs_operational": 3, 00:17:44.217 "base_bdevs_list": [ 00:17:44.217 { 00:17:44.217 "name": "pt1", 00:17:44.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:44.217 "is_configured": true, 00:17:44.217 "data_offset": 2048, 00:17:44.217 "data_size": 63488 00:17:44.217 }, 00:17:44.217 { 00:17:44.217 "name": "pt2", 00:17:44.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.217 "is_configured": true, 00:17:44.217 "data_offset": 2048, 00:17:44.217 "data_size": 63488 00:17:44.217 }, 00:17:44.217 { 00:17:44.217 "name": "pt3", 00:17:44.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:44.217 "is_configured": true, 00:17:44.217 "data_offset": 2048, 00:17:44.217 "data_size": 63488 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 }' 00:17:44.217 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.217 23:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
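For reference, the state check that follows is plain JSON-over-RPC: the test queries every raid bdev, filters for raid_bdev1 with jq, and asserts the fields against the expected values (state "online", raid_level "raid0", strip_size_kb 64, three of three base bdevs discovered). A minimal reproduction sketch, assuming a running SPDK target listening on /var/tmp/spdk-raid.sock with malloc1..malloc3 already created — rpc.py is shortened from the full scripts/rpc.py path used in the trace, and the pt1 line is inferred from its uuid (only the pt2/pt3 calls appear verbatim above):

    # wrap each malloc bdev in a passthru bdev, as the test does for pt1..pt3
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
    # fetch the assembled raid bdev's state once raid_bdev1 is online
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'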
00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:44.782 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:45.039 [2024-07-13 23:03:34.238229] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.039 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:45.039 "name": "raid_bdev1", 00:17:45.039 "aliases": [ 00:17:45.039 "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24" 00:17:45.039 ], 00:17:45.039 "product_name": "Raid Volume", 00:17:45.039 "block_size": 512, 00:17:45.039 "num_blocks": 190464, 00:17:45.039 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:45.039 "assigned_rate_limits": { 00:17:45.039 "rw_ios_per_sec": 0, 00:17:45.039 "rw_mbytes_per_sec": 0, 00:17:45.039 "r_mbytes_per_sec": 0, 00:17:45.039 "w_mbytes_per_sec": 0 00:17:45.039 }, 00:17:45.039 "claimed": false, 00:17:45.039 "zoned": false, 00:17:45.039 "supported_io_types": { 00:17:45.039 "read": true, 00:17:45.039 "write": true, 00:17:45.039 "unmap": true, 00:17:45.039 "flush": true, 00:17:45.039 "reset": true, 00:17:45.039 "nvme_admin": false, 00:17:45.039 "nvme_io": false, 00:17:45.039 "nvme_io_md": false, 00:17:45.039 "write_zeroes": true, 00:17:45.039 "zcopy": false, 00:17:45.039 "get_zone_info": false, 00:17:45.039 "zone_management": false, 00:17:45.039 "zone_append": false, 00:17:45.039 "compare": false, 00:17:45.039 "compare_and_write": false, 00:17:45.039 "abort": false, 00:17:45.039 "seek_hole": false, 00:17:45.039 "seek_data": false, 00:17:45.039 "copy": false, 00:17:45.039 "nvme_iov_md": false 00:17:45.039 }, 00:17:45.039 "memory_domains": [ 00:17:45.039 { 00:17:45.039 "dma_device_id": "system", 00:17:45.039 "dma_device_type": 1 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.039 "dma_device_type": 2 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "dma_device_id": "system", 00:17:45.039 "dma_device_type": 1 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.039 "dma_device_type": 2 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "dma_device_id": "system", 00:17:45.039 "dma_device_type": 1 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.039 "dma_device_type": 2 00:17:45.039 } 00:17:45.039 ], 00:17:45.039 "driver_specific": { 00:17:45.039 "raid": { 00:17:45.039 "uuid": "8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24", 00:17:45.039 "strip_size_kb": 64, 00:17:45.039 "state": "online", 00:17:45.039 "raid_level": "raid0", 00:17:45.039 "superblock": true, 00:17:45.039 "num_base_bdevs": 3, 00:17:45.039 "num_base_bdevs_discovered": 3, 00:17:45.039 "num_base_bdevs_operational": 3, 00:17:45.039 "base_bdevs_list": [ 00:17:45.039 { 00:17:45.039 "name": "pt1", 00:17:45.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:45.039 "is_configured": true, 00:17:45.039 "data_offset": 2048, 00:17:45.039 "data_size": 63488 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "name": "pt2", 00:17:45.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.039 "is_configured": true, 00:17:45.039 "data_offset": 2048, 00:17:45.039 "data_size": 63488 00:17:45.039 }, 00:17:45.039 { 00:17:45.039 "name": "pt3", 00:17:45.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:45.039 "is_configured": true, 00:17:45.039 "data_offset": 2048, 00:17:45.039 "data_size": 63488 00:17:45.039 } 
00:17:45.039 ] 00:17:45.039 } 00:17:45.039 } 00:17:45.039 }' 00:17:45.039 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.039 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:45.039 pt2 00:17:45.039 pt3' 00:17:45.039 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.039 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:45.039 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.296 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.296 "name": "pt1", 00:17:45.296 "aliases": [ 00:17:45.296 "00000000-0000-0000-0000-000000000001" 00:17:45.296 ], 00:17:45.296 "product_name": "passthru", 00:17:45.296 "block_size": 512, 00:17:45.296 "num_blocks": 65536, 00:17:45.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:45.296 "assigned_rate_limits": { 00:17:45.296 "rw_ios_per_sec": 0, 00:17:45.296 "rw_mbytes_per_sec": 0, 00:17:45.296 "r_mbytes_per_sec": 0, 00:17:45.296 "w_mbytes_per_sec": 0 00:17:45.296 }, 00:17:45.296 "claimed": true, 00:17:45.296 "claim_type": "exclusive_write", 00:17:45.296 "zoned": false, 00:17:45.296 "supported_io_types": { 00:17:45.296 "read": true, 00:17:45.296 "write": true, 00:17:45.296 "unmap": true, 00:17:45.296 "flush": true, 00:17:45.296 "reset": true, 00:17:45.296 "nvme_admin": false, 00:17:45.296 "nvme_io": false, 00:17:45.296 "nvme_io_md": false, 00:17:45.296 "write_zeroes": true, 00:17:45.296 "zcopy": true, 00:17:45.296 "get_zone_info": false, 00:17:45.296 "zone_management": false, 00:17:45.296 "zone_append": false, 00:17:45.296 "compare": false, 00:17:45.296 "compare_and_write": false, 00:17:45.296 "abort": true, 00:17:45.296 "seek_hole": false, 00:17:45.296 "seek_data": false, 00:17:45.296 "copy": true, 00:17:45.296 "nvme_iov_md": false 00:17:45.296 }, 00:17:45.296 "memory_domains": [ 00:17:45.296 { 00:17:45.296 "dma_device_id": "system", 00:17:45.296 "dma_device_type": 1 00:17:45.296 }, 00:17:45.296 { 00:17:45.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.296 "dma_device_type": 2 00:17:45.296 } 00:17:45.296 ], 00:17:45.296 "driver_specific": { 00:17:45.296 "passthru": { 00:17:45.296 "name": "pt1", 00:17:45.296 "base_bdev_name": "malloc1" 00:17:45.296 } 00:17:45.296 } 00:17:45.296 }' 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.297 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:45.554 23:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.813 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.813 "name": "pt2", 00:17:45.813 "aliases": [ 00:17:45.813 "00000000-0000-0000-0000-000000000002" 00:17:45.813 ], 00:17:45.813 "product_name": "passthru", 00:17:45.813 "block_size": 512, 00:17:45.813 "num_blocks": 65536, 00:17:45.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.813 "assigned_rate_limits": { 00:17:45.813 "rw_ios_per_sec": 0, 00:17:45.813 "rw_mbytes_per_sec": 0, 00:17:45.813 "r_mbytes_per_sec": 0, 00:17:45.813 "w_mbytes_per_sec": 0 00:17:45.813 }, 00:17:45.813 "claimed": true, 00:17:45.813 "claim_type": "exclusive_write", 00:17:45.813 "zoned": false, 00:17:45.813 "supported_io_types": { 00:17:45.813 "read": true, 00:17:45.813 "write": true, 00:17:45.813 "unmap": true, 00:17:45.813 "flush": true, 00:17:45.813 "reset": true, 00:17:45.813 "nvme_admin": false, 00:17:45.813 "nvme_io": false, 00:17:45.813 "nvme_io_md": false, 00:17:45.813 "write_zeroes": true, 00:17:45.813 "zcopy": true, 00:17:45.813 "get_zone_info": false, 00:17:45.813 "zone_management": false, 00:17:45.813 "zone_append": false, 00:17:45.813 "compare": false, 00:17:45.813 "compare_and_write": false, 00:17:45.813 "abort": true, 00:17:45.813 "seek_hole": false, 00:17:45.813 "seek_data": false, 00:17:45.813 "copy": true, 00:17:45.813 "nvme_iov_md": false 00:17:45.813 }, 00:17:45.813 "memory_domains": [ 00:17:45.813 { 00:17:45.813 "dma_device_id": "system", 00:17:45.813 "dma_device_type": 1 00:17:45.813 }, 00:17:45.813 { 00:17:45.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.813 "dma_device_type": 2 00:17:45.813 } 00:17:45.813 ], 00:17:45.813 "driver_specific": { 00:17:45.813 "passthru": { 00:17:45.813 "name": "pt2", 00:17:45.813 "base_bdev_name": "malloc2" 00:17:45.813 } 00:17:45.813 } 00:17:45.813 }' 00:17:45.813 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.813 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.071 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.330 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.330 23:03:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.330 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.330 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:46.330 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.589 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.589 "name": "pt3", 00:17:46.589 "aliases": [ 00:17:46.589 "00000000-0000-0000-0000-000000000003" 00:17:46.589 ], 00:17:46.589 "product_name": "passthru", 00:17:46.589 "block_size": 512, 00:17:46.589 "num_blocks": 65536, 00:17:46.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:46.589 "assigned_rate_limits": { 00:17:46.589 "rw_ios_per_sec": 0, 00:17:46.589 "rw_mbytes_per_sec": 0, 00:17:46.589 "r_mbytes_per_sec": 0, 00:17:46.589 "w_mbytes_per_sec": 0 00:17:46.589 }, 00:17:46.589 "claimed": true, 00:17:46.589 "claim_type": "exclusive_write", 00:17:46.589 "zoned": false, 00:17:46.589 "supported_io_types": { 00:17:46.589 "read": true, 00:17:46.589 "write": true, 00:17:46.589 "unmap": true, 00:17:46.589 "flush": true, 00:17:46.589 "reset": true, 00:17:46.589 "nvme_admin": false, 00:17:46.589 "nvme_io": false, 00:17:46.589 "nvme_io_md": false, 00:17:46.589 "write_zeroes": true, 00:17:46.589 "zcopy": true, 00:17:46.589 "get_zone_info": false, 00:17:46.589 "zone_management": false, 00:17:46.589 "zone_append": false, 00:17:46.589 "compare": false, 00:17:46.589 "compare_and_write": false, 00:17:46.589 "abort": true, 00:17:46.589 "seek_hole": false, 00:17:46.589 "seek_data": false, 00:17:46.589 "copy": true, 00:17:46.589 "nvme_iov_md": false 00:17:46.589 }, 00:17:46.589 "memory_domains": [ 00:17:46.589 { 00:17:46.589 "dma_device_id": "system", 00:17:46.589 "dma_device_type": 1 00:17:46.589 }, 00:17:46.589 { 00:17:46.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.589 "dma_device_type": 2 00:17:46.589 } 00:17:46.589 ], 00:17:46.589 "driver_specific": { 00:17:46.589 "passthru": { 00:17:46.589 "name": "pt3", 00:17:46.589 "base_bdev_name": "malloc3" 00:17:46.589 } 00:17:46.589 } 00:17:46.589 }' 00:17:46.589 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.589 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.589 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.589 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.589 23:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:46.848 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:47.106 [2024-07-13 23:03:36.506745] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24 '!=' 8a8034e4-bd8a-4f96-9e2b-3d35d43d5e24 ']' 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137273 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 137273 ']' 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 137273 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137273 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137273' 00:17:47.364 killing process with pid 137273 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 137273 00:17:47.364 [2024-07-13 23:03:36.552777] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.364 [2024-07-13 23:03:36.552866] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.364 [2024-07-13 23:03:36.552982] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.364 [2024-07-13 23:03:36.552997] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:47.364 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 137273 00:17:47.364 [2024-07-13 23:03:36.582316] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.624 23:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:47.624 00:17:47.624 real 0m14.275s 00:17:47.624 user 0m26.630s 00:17:47.624 sys 0m1.743s 00:17:47.624 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.624 23:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.624 ************************************ 00:17:47.624 END TEST raid_superblock_test 00:17:47.624 ************************************ 00:17:47.624 23:03:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:47.624 23:03:36 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:47.624 23:03:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:47.624 23:03:36 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.624 23:03:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.624 ************************************ 00:17:47.624 START TEST raid_read_error_test 00:17:47.624 ************************************ 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.2NXvfRB8MO 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137747 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137747 /var/tmp/spdk-raid.sock 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 
-z -f -L bdev_raid 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 137747 ']' 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.624 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.624 [2024-07-13 23:03:36.925262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:47.624 [2024-07-13 23:03:36.925529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137747 ] 00:17:47.883 [2024-07-13 23:03:37.061444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.883 [2024-07-13 23:03:37.127936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.883 [2024-07-13 23:03:37.181313] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.821 23:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.821 23:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:48.821 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:48.821 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:48.821 BaseBdev1_malloc 00:17:48.821 23:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:49.079 true 00:17:49.079 23:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:49.338 [2024-07-13 23:03:38.605240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:49.338 [2024-07-13 23:03:38.605344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.338 [2024-07-13 23:03:38.605396] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:17:49.338 [2024-07-13 23:03:38.605444] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.338 [2024-07-13 23:03:38.607993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.338 [2024-07-13 23:03:38.608051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.338 BaseBdev1 00:17:49.338 23:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:49.338 23:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:49.596 BaseBdev2_malloc 00:17:49.596 23:03:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:49.855 true 00:17:49.855 23:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:50.114 [2024-07-13 23:03:39.299764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:50.114 [2024-07-13 23:03:39.299898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.114 [2024-07-13 23:03:39.299951] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:50.114 [2024-07-13 23:03:39.300001] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.114 [2024-07-13 23:03:39.302753] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.114 [2024-07-13 23:03:39.302817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:50.114 BaseBdev2 00:17:50.114 23:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:50.114 23:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:50.373 BaseBdev3_malloc 00:17:50.373 23:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:50.632 true 00:17:50.632 23:03:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:50.632 [2024-07-13 23:03:40.036549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:50.632 [2024-07-13 23:03:40.036654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.632 [2024-07-13 23:03:40.036703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:50.632 [2024-07-13 23:03:40.036748] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.632 [2024-07-13 23:03:40.039487] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.632 [2024-07-13 23:03:40.039722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:50.890 BaseBdev3 00:17:50.890 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:50.891 [2024-07-13 23:03:40.292743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.891 [2024-07-13 23:03:40.294860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.891 [2024-07-13 23:03:40.295146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.891 [2024-07-13 23:03:40.295583] bdev_raid.c:1694:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x616000008180 00:17:50.891 [2024-07-13 23:03:40.295741] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:50.891 [2024-07-13 23:03:40.295958] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:50.891 [2024-07-13 23:03:40.296516] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:17:50.891 [2024-07-13 23:03:40.296705] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:17:50.891 [2024-07-13 23:03:40.297080] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.149 "name": "raid_bdev1", 00:17:51.149 "uuid": "7b430d57-3b2d-424a-995e-4796cafeb52b", 00:17:51.149 "strip_size_kb": 64, 00:17:51.149 "state": "online", 00:17:51.149 "raid_level": "raid0", 00:17:51.149 "superblock": true, 00:17:51.149 "num_base_bdevs": 3, 00:17:51.149 "num_base_bdevs_discovered": 3, 00:17:51.149 "num_base_bdevs_operational": 3, 00:17:51.149 "base_bdevs_list": [ 00:17:51.149 { 00:17:51.149 "name": "BaseBdev1", 00:17:51.149 "uuid": "caa1c77e-9c0c-58bf-b480-dbc56674e499", 00:17:51.149 "is_configured": true, 00:17:51.149 "data_offset": 2048, 00:17:51.149 "data_size": 63488 00:17:51.149 }, 00:17:51.149 { 00:17:51.149 "name": "BaseBdev2", 00:17:51.149 "uuid": "6892205f-f1f8-5be8-a0e0-991739d27c4a", 00:17:51.149 "is_configured": true, 00:17:51.149 "data_offset": 2048, 00:17:51.149 "data_size": 63488 00:17:51.149 }, 00:17:51.149 { 00:17:51.149 "name": "BaseBdev3", 00:17:51.149 "uuid": "bffdd3a9-a281-579e-a6da-414eafd5abc5", 00:17:51.149 "is_configured": true, 00:17:51.149 "data_offset": 2048, 00:17:51.149 "data_size": 63488 00:17:51.149 } 00:17:51.149 ] 00:17:51.149 }' 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.149 23:03:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.092 23:03:41 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:52.092 23:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:52.092 [2024-07-13 23:03:41.245721] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.028 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.287 23:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.287 "name": "raid_bdev1", 00:17:53.287 "uuid": "7b430d57-3b2d-424a-995e-4796cafeb52b", 00:17:53.287 "strip_size_kb": 64, 00:17:53.287 "state": "online", 00:17:53.287 "raid_level": "raid0", 00:17:53.287 "superblock": true, 00:17:53.287 "num_base_bdevs": 3, 00:17:53.287 "num_base_bdevs_discovered": 3, 00:17:53.287 "num_base_bdevs_operational": 3, 00:17:53.287 "base_bdevs_list": [ 00:17:53.287 { 00:17:53.287 "name": "BaseBdev1", 00:17:53.287 "uuid": "caa1c77e-9c0c-58bf-b480-dbc56674e499", 00:17:53.287 "is_configured": true, 00:17:53.287 "data_offset": 2048, 00:17:53.287 "data_size": 63488 00:17:53.287 }, 00:17:53.287 { 00:17:53.287 "name": "BaseBdev2", 00:17:53.287 "uuid": "6892205f-f1f8-5be8-a0e0-991739d27c4a", 00:17:53.287 "is_configured": true, 00:17:53.287 "data_offset": 2048, 00:17:53.287 "data_size": 63488 00:17:53.287 }, 00:17:53.287 { 00:17:53.287 "name": "BaseBdev3", 00:17:53.287 "uuid": "bffdd3a9-a281-579e-a6da-414eafd5abc5", 00:17:53.287 "is_configured": true, 00:17:53.287 "data_offset": 2048, 00:17:53.287 "data_size": 63488 00:17:53.287 } 00:17:53.287 ] 00:17:53.287 }' 00:17:53.287 23:03:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.287 23:03:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:54.227 [2024-07-13 23:03:43.595336] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.227 [2024-07-13 23:03:43.595668] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.227 [2024-07-13 23:03:43.598452] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.227 [2024-07-13 23:03:43.598665] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.227 [2024-07-13 23:03:43.598815] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.227 [2024-07-13 23:03:43.598918] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:17:54.227 0 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137747 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 137747 ']' 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 137747 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137747 00:17:54.227 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:54.484 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:54.484 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137747' 00:17:54.484 killing process with pid 137747 00:17:54.484 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 137747 00:17:54.484 [2024-07-13 23:03:43.636519] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.484 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 137747 00:17:54.484 [2024-07-13 23:03:43.660070] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.2NXvfRB8MO 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:17:54.742 00:17:54.742 real 0m7.041s 00:17:54.742 user 0m11.528s 00:17:54.742 sys 0m0.870s 00:17:54.742 
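The write-error run that starts below repeats the read-error flow above with only the injected I/O type flipped. Condensed from the trace, each run boils down to the following RPC sequence — a sketch, not the test script itself: rpc.py/bdevperf.py paths are shortened from the full spdk_repo paths in the log, and only the BaseBdev1 leg is shown where the test loops over all three base bdevs:

    # build a malloc -> error -> passthru stack for one base bdev
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # assemble raid0 with a 64k strip and an on-disk superblock (-s)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # arm the error bdev, then drive I/O from the already-running bdevperf
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    # raid0 has no redundancy (has_redundancy returns 1 in the trace),
    # so the injected failure is expected to surface in the fail-per-second count
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1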
23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.742 23:03:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 ************************************ 00:17:54.742 END TEST raid_read_error_test 00:17:54.742 ************************************ 00:17:54.742 23:03:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:54.742 23:03:43 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:17:54.742 23:03:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:54.742 23:03:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.742 23:03:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 ************************************ 00:17:54.742 START TEST raid_write_error_test 00:17:54.742 ************************************ 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # 
create_arg+=' -z 64' 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.chhvgIEQXy 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137948 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137948 /var/tmp/spdk-raid.sock 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 137948 ']' 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:54.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.742 23:03:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 [2024-07-13 23:03:44.032732] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:54.742 [2024-07-13 23:03:44.033428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137948 ] 00:17:54.999 [2024-07-13 23:03:44.181973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.999 [2024-07-13 23:03:44.246500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.999 [2024-07-13 23:03:44.300307] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.931 23:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.931 23:03:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:55.931 23:03:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:55.931 23:03:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:55.931 BaseBdev1_malloc 00:17:55.931 23:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:56.189 true 00:17:56.189 23:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:56.447 [2024-07-13 23:03:45.688566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:56.447 [2024-07-13 23:03:45.688862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.447 
[2024-07-13 23:03:45.688991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:17:56.447 [2024-07-13 23:03:45.689175] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.447 [2024-07-13 23:03:45.691917] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.447 [2024-07-13 23:03:45.692101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.447 BaseBdev1 00:17:56.447 23:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:56.447 23:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:56.706 BaseBdev2_malloc 00:17:56.706 23:03:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:56.964 true 00:17:56.964 23:03:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:56.964 [2024-07-13 23:03:46.355199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:56.964 [2024-07-13 23:03:46.355465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.964 [2024-07-13 23:03:46.355639] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:56.964 [2024-07-13 23:03:46.355779] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.964 [2024-07-13 23:03:46.358186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.964 [2024-07-13 23:03:46.358357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:56.964 BaseBdev2 00:17:56.964 23:03:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:56.965 23:03:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:57.223 BaseBdev3_malloc 00:17:57.223 23:03:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:57.481 true 00:17:57.481 23:03:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:57.739 [2024-07-13 23:03:47.020446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:57.739 [2024-07-13 23:03:47.020723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.739 [2024-07-13 23:03:47.020888] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:57.739 [2024-07-13 23:03:47.021081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.739 [2024-07-13 23:03:47.023577] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.739 [2024-07-13 23:03:47.023761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:17:57.739 BaseBdev3 00:17:57.739 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:57.998 [2024-07-13 23:03:47.256641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.998 [2024-07-13 23:03:47.258728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.998 [2024-07-13 23:03:47.258993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.998 [2024-07-13 23:03:47.259371] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:17:57.998 [2024-07-13 23:03:47.259540] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:57.998 [2024-07-13 23:03:47.259697] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:57.998 [2024-07-13 23:03:47.260255] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:17:57.998 [2024-07-13 23:03:47.260411] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:17:57.998 [2024-07-13 23:03:47.260726] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.998 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.257 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.257 "name": "raid_bdev1", 00:17:58.257 "uuid": "e1374eaa-d996-438d-b375-9de2de714bda", 00:17:58.257 "strip_size_kb": 64, 00:17:58.257 "state": "online", 00:17:58.257 "raid_level": "raid0", 00:17:58.257 "superblock": true, 00:17:58.257 "num_base_bdevs": 3, 00:17:58.257 "num_base_bdevs_discovered": 3, 00:17:58.257 "num_base_bdevs_operational": 3, 00:17:58.257 "base_bdevs_list": [ 00:17:58.257 { 00:17:58.257 "name": "BaseBdev1", 00:17:58.257 "uuid": "563c71e9-ce2c-5df6-85bf-aeab42f514e0", 00:17:58.257 "is_configured": true, 00:17:58.257 "data_offset": 2048, 00:17:58.257 "data_size": 63488 00:17:58.257 }, 
00:17:58.257 { 00:17:58.257 "name": "BaseBdev2", 00:17:58.257 "uuid": "f03fa0cb-d4dc-5dd9-9ea7-4a294d6e4782", 00:17:58.257 "is_configured": true, 00:17:58.257 "data_offset": 2048, 00:17:58.257 "data_size": 63488 00:17:58.257 }, 00:17:58.257 { 00:17:58.257 "name": "BaseBdev3", 00:17:58.257 "uuid": "2a969ee3-e160-577d-84a3-84f630101de9", 00:17:58.257 "is_configured": true, 00:17:58.257 "data_offset": 2048, 00:17:58.257 "data_size": 63488 00:17:58.257 } 00:17:58.257 ] 00:17:58.257 }' 00:17:58.257 23:03:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.257 23:03:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.824 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:58.824 23:03:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:58.824 [2024-07-13 23:03:48.221390] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:59.760 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.018 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.585 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.585 "name": "raid_bdev1", 00:18:00.585 "uuid": "e1374eaa-d996-438d-b375-9de2de714bda", 00:18:00.585 "strip_size_kb": 64, 00:18:00.585 "state": "online", 00:18:00.585 "raid_level": "raid0", 00:18:00.585 "superblock": true, 00:18:00.585 "num_base_bdevs": 3, 00:18:00.585 "num_base_bdevs_discovered": 3, 00:18:00.585 "num_base_bdevs_operational": 3, 00:18:00.585 "base_bdevs_list": [ 
00:18:00.585 { 00:18:00.585 "name": "BaseBdev1", 00:18:00.585 "uuid": "563c71e9-ce2c-5df6-85bf-aeab42f514e0", 00:18:00.585 "is_configured": true, 00:18:00.585 "data_offset": 2048, 00:18:00.585 "data_size": 63488 00:18:00.585 }, 00:18:00.585 { 00:18:00.585 "name": "BaseBdev2", 00:18:00.585 "uuid": "f03fa0cb-d4dc-5dd9-9ea7-4a294d6e4782", 00:18:00.585 "is_configured": true, 00:18:00.585 "data_offset": 2048, 00:18:00.585 "data_size": 63488 00:18:00.585 }, 00:18:00.585 { 00:18:00.585 "name": "BaseBdev3", 00:18:00.585 "uuid": "2a969ee3-e160-577d-84a3-84f630101de9", 00:18:00.585 "is_configured": true, 00:18:00.585 "data_offset": 2048, 00:18:00.585 "data_size": 63488 00:18:00.585 } 00:18:00.585 ] 00:18:00.585 }' 00:18:00.585 23:03:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.585 23:03:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.152 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.411 [2024-07-13 23:03:50.610546] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.411 [2024-07-13 23:03:50.610815] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.411 [2024-07-13 23:03:50.613633] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.411 [2024-07-13 23:03:50.613829] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.411 [2024-07-13 23:03:50.613978] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.411 [2024-07-13 23:03:50.614078] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:18:01.411 0 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137948 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 137948 ']' 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 137948 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137948 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137948' 00:18:01.411 killing process with pid 137948 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 137948 00:18:01.411 [2024-07-13 23:03:50.653732] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.411 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 137948 00:18:01.411 [2024-07-13 23:03:50.677104] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.chhvgIEQXy 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:18:01.670 00:18:01.670 real 0m6.976s 00:18:01.670 user 0m11.359s 00:18:01.670 sys 0m0.811s 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:01.670 23:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.670 ************************************ 00:18:01.670 END TEST raid_write_error_test 00:18:01.670 ************************************ 00:18:01.670 23:03:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:01.670 23:03:50 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:01.670 23:03:50 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:01.670 23:03:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:01.670 23:03:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.670 23:03:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.670 ************************************ 00:18:01.670 START TEST raid_state_function_test 00:18:01.670 ************************************ 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:01.670 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:01.670 23:03:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=138140 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138140' 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:01.670 Process raid pid: 138140 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 138140 /var/tmp/spdk-raid.sock 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 138140 ']' 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:01.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.670 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.670 [2024-07-13 23:03:51.058127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
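[editor's note] Everything raid_state_function_test does from here on is driven through scripts/rpc.py against the dedicated /var/tmp/spdk-raid.sock socket of the bdev_svc app started above. A minimal standalone sketch of the same create/verify/delete cycle follows; the socket path, RPC commands, and bdev names are taken directly from this log, but the surrounding script is an illustrative reconstruction, not the test's actual helper functions.

    #!/usr/bin/env bash
    # Editorial sketch (not part of the log): the create/verify cycle that
    # raid_state_function_test performs over the SPDK RPC socket.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create three 32 MiB malloc base bdevs with 512-byte blocks, as in the log.
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "$b"
    done

    # Assemble a concat raid with a 64 KiB strip size (-z 64 -r concat).
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Query raid state the way verify_raid_bdev_state does: dump all raid
    # bdevs and select ours by name, then inspect state/num_base_bdevs_*.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # Tear down again (mirrors the bdev_raid_delete step in the log).
    $RPC bdev_raid_delete Existed_Raid

Note that the test itself inverts the first two steps: it creates the raid before any base bdev exists, which is why the dumps below show state "configuring" and num_base_bdevs_discovered climbing from 0 to 3 as each BaseBdev is claimed.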
00:18:01.670 [2024-07-13 23:03:51.058543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.928 [2024-07-13 23:03:51.202545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.928 [2024-07-13 23:03:51.264332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.928 [2024-07-13 23:03:51.317591] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.187 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.187 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:02.187 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:02.446 [2024-07-13 23:03:51.610170] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.446 [2024-07-13 23:03:51.610469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.446 [2024-07-13 23:03:51.610595] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.446 [2024-07-13 23:03:51.610660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.446 [2024-07-13 23:03:51.610763] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:02.446 [2024-07-13 23:03:51.610846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.446 "name": "Existed_Raid", 00:18:02.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.446 
"strip_size_kb": 64, 00:18:02.446 "state": "configuring", 00:18:02.446 "raid_level": "concat", 00:18:02.446 "superblock": false, 00:18:02.446 "num_base_bdevs": 3, 00:18:02.446 "num_base_bdevs_discovered": 0, 00:18:02.446 "num_base_bdevs_operational": 3, 00:18:02.446 "base_bdevs_list": [ 00:18:02.446 { 00:18:02.446 "name": "BaseBdev1", 00:18:02.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.446 "is_configured": false, 00:18:02.446 "data_offset": 0, 00:18:02.446 "data_size": 0 00:18:02.446 }, 00:18:02.446 { 00:18:02.446 "name": "BaseBdev2", 00:18:02.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.446 "is_configured": false, 00:18:02.446 "data_offset": 0, 00:18:02.446 "data_size": 0 00:18:02.446 }, 00:18:02.446 { 00:18:02.446 "name": "BaseBdev3", 00:18:02.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.446 "is_configured": false, 00:18:02.446 "data_offset": 0, 00:18:02.446 "data_size": 0 00:18:02.446 } 00:18:02.446 ] 00:18:02.446 }' 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.446 23:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.381 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:03.381 [2024-07-13 23:03:52.698262] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.381 [2024-07-13 23:03:52.698529] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:03.381 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:03.642 [2024-07-13 23:03:52.978358] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:03.642 [2024-07-13 23:03:52.978686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:03.642 [2024-07-13 23:03:52.978806] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.642 [2024-07-13 23:03:52.978870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.642 [2024-07-13 23:03:52.978996] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.642 [2024-07-13 23:03:52.979068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.642 23:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:03.904 [2024-07-13 23:03:53.239129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.904 BaseBdev1 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:03.904 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.162 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:04.419 [ 00:18:04.419 { 00:18:04.419 "name": "BaseBdev1", 00:18:04.419 "aliases": [ 00:18:04.419 "c3845a2f-3a26-4030-b979-4ca3b662f876" 00:18:04.419 ], 00:18:04.419 "product_name": "Malloc disk", 00:18:04.419 "block_size": 512, 00:18:04.419 "num_blocks": 65536, 00:18:04.419 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 00:18:04.419 "assigned_rate_limits": { 00:18:04.419 "rw_ios_per_sec": 0, 00:18:04.419 "rw_mbytes_per_sec": 0, 00:18:04.419 "r_mbytes_per_sec": 0, 00:18:04.419 "w_mbytes_per_sec": 0 00:18:04.419 }, 00:18:04.419 "claimed": true, 00:18:04.419 "claim_type": "exclusive_write", 00:18:04.419 "zoned": false, 00:18:04.419 "supported_io_types": { 00:18:04.419 "read": true, 00:18:04.419 "write": true, 00:18:04.419 "unmap": true, 00:18:04.419 "flush": true, 00:18:04.419 "reset": true, 00:18:04.419 "nvme_admin": false, 00:18:04.419 "nvme_io": false, 00:18:04.419 "nvme_io_md": false, 00:18:04.419 "write_zeroes": true, 00:18:04.419 "zcopy": true, 00:18:04.419 "get_zone_info": false, 00:18:04.419 "zone_management": false, 00:18:04.419 "zone_append": false, 00:18:04.419 "compare": false, 00:18:04.419 "compare_and_write": false, 00:18:04.420 "abort": true, 00:18:04.420 "seek_hole": false, 00:18:04.420 "seek_data": false, 00:18:04.420 "copy": true, 00:18:04.420 "nvme_iov_md": false 00:18:04.420 }, 00:18:04.420 "memory_domains": [ 00:18:04.420 { 00:18:04.420 "dma_device_id": "system", 00:18:04.420 "dma_device_type": 1 00:18:04.420 }, 00:18:04.420 { 00:18:04.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.420 "dma_device_type": 2 00:18:04.420 } 00:18:04.420 ], 00:18:04.420 "driver_specific": {} 00:18:04.420 } 00:18:04.420 ] 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.420 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.677 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.677 "name": "Existed_Raid", 00:18:04.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.677 "strip_size_kb": 64, 00:18:04.677 "state": "configuring", 00:18:04.677 "raid_level": "concat", 00:18:04.677 "superblock": false, 00:18:04.677 "num_base_bdevs": 3, 00:18:04.677 "num_base_bdevs_discovered": 1, 00:18:04.677 "num_base_bdevs_operational": 3, 00:18:04.677 "base_bdevs_list": [ 00:18:04.677 { 00:18:04.677 "name": "BaseBdev1", 00:18:04.677 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 00:18:04.677 "is_configured": true, 00:18:04.677 "data_offset": 0, 00:18:04.677 "data_size": 65536 00:18:04.677 }, 00:18:04.677 { 00:18:04.677 "name": "BaseBdev2", 00:18:04.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.677 "is_configured": false, 00:18:04.677 "data_offset": 0, 00:18:04.677 "data_size": 0 00:18:04.677 }, 00:18:04.677 { 00:18:04.677 "name": "BaseBdev3", 00:18:04.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.677 "is_configured": false, 00:18:04.677 "data_offset": 0, 00:18:04.677 "data_size": 0 00:18:04.677 } 00:18:04.677 ] 00:18:04.677 }' 00:18:04.677 23:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.677 23:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.244 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:05.502 [2024-07-13 23:03:54.743518] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.502 [2024-07-13 23:03:54.743762] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:05.502 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:05.761 [2024-07-13 23:03:54.951609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.761 [2024-07-13 23:03:54.954001] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.761 [2024-07-13 23:03:54.954209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.761 [2024-07-13 23:03:54.954339] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.761 [2024-07-13 23:03:54.954407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.761 23:03:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.020 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.020 "name": "Existed_Raid", 00:18:06.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.020 "strip_size_kb": 64, 00:18:06.020 "state": "configuring", 00:18:06.020 "raid_level": "concat", 00:18:06.020 "superblock": false, 00:18:06.020 "num_base_bdevs": 3, 00:18:06.020 "num_base_bdevs_discovered": 1, 00:18:06.020 "num_base_bdevs_operational": 3, 00:18:06.020 "base_bdevs_list": [ 00:18:06.020 { 00:18:06.020 "name": "BaseBdev1", 00:18:06.020 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 00:18:06.020 "is_configured": true, 00:18:06.020 "data_offset": 0, 00:18:06.020 "data_size": 65536 00:18:06.020 }, 00:18:06.020 { 00:18:06.020 "name": "BaseBdev2", 00:18:06.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.020 "is_configured": false, 00:18:06.020 "data_offset": 0, 00:18:06.020 "data_size": 0 00:18:06.020 }, 00:18:06.020 { 00:18:06.020 "name": "BaseBdev3", 00:18:06.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.020 "is_configured": false, 00:18:06.020 "data_offset": 0, 00:18:06.020 "data_size": 0 00:18:06.020 } 00:18:06.020 ] 00:18:06.020 }' 00:18:06.020 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.020 23:03:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.588 23:03:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.846 [2024-07-13 23:03:56.092989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.846 BaseBdev2 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:18:06.846 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.105 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.364 [ 00:18:07.364 { 00:18:07.364 "name": "BaseBdev2", 00:18:07.364 "aliases": [ 00:18:07.364 "1a2f25c5-929d-415f-86e5-fc5152511b21" 00:18:07.364 ], 00:18:07.364 "product_name": "Malloc disk", 00:18:07.364 "block_size": 512, 00:18:07.364 "num_blocks": 65536, 00:18:07.364 "uuid": "1a2f25c5-929d-415f-86e5-fc5152511b21", 00:18:07.364 "assigned_rate_limits": { 00:18:07.364 "rw_ios_per_sec": 0, 00:18:07.364 "rw_mbytes_per_sec": 0, 00:18:07.364 "r_mbytes_per_sec": 0, 00:18:07.364 "w_mbytes_per_sec": 0 00:18:07.364 }, 00:18:07.364 "claimed": true, 00:18:07.364 "claim_type": "exclusive_write", 00:18:07.364 "zoned": false, 00:18:07.364 "supported_io_types": { 00:18:07.364 "read": true, 00:18:07.364 "write": true, 00:18:07.364 "unmap": true, 00:18:07.364 "flush": true, 00:18:07.364 "reset": true, 00:18:07.364 "nvme_admin": false, 00:18:07.364 "nvme_io": false, 00:18:07.364 "nvme_io_md": false, 00:18:07.364 "write_zeroes": true, 00:18:07.364 "zcopy": true, 00:18:07.364 "get_zone_info": false, 00:18:07.364 "zone_management": false, 00:18:07.364 "zone_append": false, 00:18:07.364 "compare": false, 00:18:07.364 "compare_and_write": false, 00:18:07.364 "abort": true, 00:18:07.364 "seek_hole": false, 00:18:07.364 "seek_data": false, 00:18:07.364 "copy": true, 00:18:07.364 "nvme_iov_md": false 00:18:07.364 }, 00:18:07.364 "memory_domains": [ 00:18:07.364 { 00:18:07.364 "dma_device_id": "system", 00:18:07.364 "dma_device_type": 1 00:18:07.364 }, 00:18:07.364 { 00:18:07.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.364 "dma_device_type": 2 00:18:07.364 } 00:18:07.364 ], 00:18:07.364 "driver_specific": {} 00:18:07.364 } 00:18:07.364 ] 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.364 
23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.364 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.624 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.624 "name": "Existed_Raid", 00:18:07.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.624 "strip_size_kb": 64, 00:18:07.624 "state": "configuring", 00:18:07.624 "raid_level": "concat", 00:18:07.624 "superblock": false, 00:18:07.624 "num_base_bdevs": 3, 00:18:07.624 "num_base_bdevs_discovered": 2, 00:18:07.624 "num_base_bdevs_operational": 3, 00:18:07.624 "base_bdevs_list": [ 00:18:07.624 { 00:18:07.624 "name": "BaseBdev1", 00:18:07.624 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 00:18:07.624 "is_configured": true, 00:18:07.624 "data_offset": 0, 00:18:07.624 "data_size": 65536 00:18:07.624 }, 00:18:07.624 { 00:18:07.624 "name": "BaseBdev2", 00:18:07.624 "uuid": "1a2f25c5-929d-415f-86e5-fc5152511b21", 00:18:07.624 "is_configured": true, 00:18:07.624 "data_offset": 0, 00:18:07.624 "data_size": 65536 00:18:07.624 }, 00:18:07.624 { 00:18:07.624 "name": "BaseBdev3", 00:18:07.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.624 "is_configured": false, 00:18:07.624 "data_offset": 0, 00:18:07.624 "data_size": 0 00:18:07.624 } 00:18:07.624 ] 00:18:07.624 }' 00:18:07.624 23:03:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.624 23:03:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.191 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.450 [2024-07-13 23:03:57.682575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.450 [2024-07-13 23:03:57.682859] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:08.450 [2024-07-13 23:03:57.682941] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:08.450 [2024-07-13 23:03:57.683216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:08.450 [2024-07-13 23:03:57.683914] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:08.450 [2024-07-13 23:03:57.684050] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:08.450 [2024-07-13 23:03:57.684454] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.450 BaseBdev3 00:18:08.450 23:03:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:08.450 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:08.450 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:08.450 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:08.450 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:08.450 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:08.450 23:03:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.709 23:03:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.967 [ 00:18:08.967 { 00:18:08.967 "name": "BaseBdev3", 00:18:08.967 "aliases": [ 00:18:08.967 "cc82b206-62c0-4d30-b667-f865470d3020" 00:18:08.967 ], 00:18:08.967 "product_name": "Malloc disk", 00:18:08.967 "block_size": 512, 00:18:08.967 "num_blocks": 65536, 00:18:08.967 "uuid": "cc82b206-62c0-4d30-b667-f865470d3020", 00:18:08.967 "assigned_rate_limits": { 00:18:08.967 "rw_ios_per_sec": 0, 00:18:08.967 "rw_mbytes_per_sec": 0, 00:18:08.967 "r_mbytes_per_sec": 0, 00:18:08.967 "w_mbytes_per_sec": 0 00:18:08.967 }, 00:18:08.967 "claimed": true, 00:18:08.967 "claim_type": "exclusive_write", 00:18:08.967 "zoned": false, 00:18:08.967 "supported_io_types": { 00:18:08.967 "read": true, 00:18:08.967 "write": true, 00:18:08.967 "unmap": true, 00:18:08.967 "flush": true, 00:18:08.967 "reset": true, 00:18:08.967 "nvme_admin": false, 00:18:08.967 "nvme_io": false, 00:18:08.967 "nvme_io_md": false, 00:18:08.967 "write_zeroes": true, 00:18:08.967 "zcopy": true, 00:18:08.967 "get_zone_info": false, 00:18:08.967 "zone_management": false, 00:18:08.967 "zone_append": false, 00:18:08.967 "compare": false, 00:18:08.967 "compare_and_write": false, 00:18:08.967 "abort": true, 00:18:08.967 "seek_hole": false, 00:18:08.967 "seek_data": false, 00:18:08.967 "copy": true, 00:18:08.967 "nvme_iov_md": false 00:18:08.967 }, 00:18:08.967 "memory_domains": [ 00:18:08.967 { 00:18:08.967 "dma_device_id": "system", 00:18:08.967 "dma_device_type": 1 00:18:08.967 }, 00:18:08.967 { 00:18:08.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.967 "dma_device_type": 2 00:18:08.967 } 00:18:08.967 ], 00:18:08.967 "driver_specific": {} 00:18:08.967 } 00:18:08.967 ] 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:08.967 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.968 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.968 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.968 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.968 23:03:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.968 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.226 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.226 "name": "Existed_Raid", 00:18:09.226 "uuid": "75d61ad2-511d-46c4-a298-337cc85c8bae", 00:18:09.226 "strip_size_kb": 64, 00:18:09.226 "state": "online", 00:18:09.226 "raid_level": "concat", 00:18:09.226 "superblock": false, 00:18:09.226 "num_base_bdevs": 3, 00:18:09.226 "num_base_bdevs_discovered": 3, 00:18:09.226 "num_base_bdevs_operational": 3, 00:18:09.226 "base_bdevs_list": [ 00:18:09.226 { 00:18:09.226 "name": "BaseBdev1", 00:18:09.226 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 00:18:09.226 "is_configured": true, 00:18:09.226 "data_offset": 0, 00:18:09.226 "data_size": 65536 00:18:09.226 }, 00:18:09.226 { 00:18:09.226 "name": "BaseBdev2", 00:18:09.226 "uuid": "1a2f25c5-929d-415f-86e5-fc5152511b21", 00:18:09.226 "is_configured": true, 00:18:09.226 "data_offset": 0, 00:18:09.226 "data_size": 65536 00:18:09.226 }, 00:18:09.226 { 00:18:09.226 "name": "BaseBdev3", 00:18:09.226 "uuid": "cc82b206-62c0-4d30-b667-f865470d3020", 00:18:09.226 "is_configured": true, 00:18:09.226 "data_offset": 0, 00:18:09.226 "data_size": 65536 00:18:09.226 } 00:18:09.226 ] 00:18:09.226 }' 00:18:09.226 23:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.226 23:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:09.793 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:10.052 [2024-07-13 23:03:59.308853] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.052 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:10.052 "name": "Existed_Raid", 00:18:10.052 "aliases": [ 00:18:10.052 "75d61ad2-511d-46c4-a298-337cc85c8bae" 00:18:10.052 ], 00:18:10.052 "product_name": "Raid Volume", 00:18:10.052 "block_size": 512, 00:18:10.052 "num_blocks": 196608, 00:18:10.052 "uuid": "75d61ad2-511d-46c4-a298-337cc85c8bae", 00:18:10.052 "assigned_rate_limits": { 00:18:10.052 "rw_ios_per_sec": 0, 00:18:10.052 "rw_mbytes_per_sec": 0, 00:18:10.052 "r_mbytes_per_sec": 0, 00:18:10.052 "w_mbytes_per_sec": 0 00:18:10.052 }, 00:18:10.052 "claimed": false, 00:18:10.052 "zoned": false, 00:18:10.052 "supported_io_types": { 00:18:10.052 "read": true, 00:18:10.052 "write": true, 00:18:10.052 "unmap": true, 00:18:10.052 "flush": true, 
00:18:10.052 "reset": true, 00:18:10.052 "nvme_admin": false, 00:18:10.052 "nvme_io": false, 00:18:10.052 "nvme_io_md": false, 00:18:10.052 "write_zeroes": true, 00:18:10.052 "zcopy": false, 00:18:10.052 "get_zone_info": false, 00:18:10.052 "zone_management": false, 00:18:10.052 "zone_append": false, 00:18:10.052 "compare": false, 00:18:10.052 "compare_and_write": false, 00:18:10.052 "abort": false, 00:18:10.052 "seek_hole": false, 00:18:10.052 "seek_data": false, 00:18:10.052 "copy": false, 00:18:10.052 "nvme_iov_md": false 00:18:10.052 }, 00:18:10.052 "memory_domains": [ 00:18:10.052 { 00:18:10.052 "dma_device_id": "system", 00:18:10.052 "dma_device_type": 1 00:18:10.052 }, 00:18:10.052 { 00:18:10.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.052 "dma_device_type": 2 00:18:10.052 }, 00:18:10.052 { 00:18:10.052 "dma_device_id": "system", 00:18:10.052 "dma_device_type": 1 00:18:10.052 }, 00:18:10.052 { 00:18:10.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.052 "dma_device_type": 2 00:18:10.052 }, 00:18:10.052 { 00:18:10.052 "dma_device_id": "system", 00:18:10.052 "dma_device_type": 1 00:18:10.052 }, 00:18:10.052 { 00:18:10.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.052 "dma_device_type": 2 00:18:10.052 } 00:18:10.052 ], 00:18:10.052 "driver_specific": { 00:18:10.052 "raid": { 00:18:10.052 "uuid": "75d61ad2-511d-46c4-a298-337cc85c8bae", 00:18:10.052 "strip_size_kb": 64, 00:18:10.052 "state": "online", 00:18:10.052 "raid_level": "concat", 00:18:10.052 "superblock": false, 00:18:10.052 "num_base_bdevs": 3, 00:18:10.052 "num_base_bdevs_discovered": 3, 00:18:10.052 "num_base_bdevs_operational": 3, 00:18:10.052 "base_bdevs_list": [ 00:18:10.052 { 00:18:10.052 "name": "BaseBdev1", 00:18:10.052 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 00:18:10.052 "is_configured": true, 00:18:10.052 "data_offset": 0, 00:18:10.052 "data_size": 65536 00:18:10.052 }, 00:18:10.052 { 00:18:10.053 "name": "BaseBdev2", 00:18:10.053 "uuid": "1a2f25c5-929d-415f-86e5-fc5152511b21", 00:18:10.053 "is_configured": true, 00:18:10.053 "data_offset": 0, 00:18:10.053 "data_size": 65536 00:18:10.053 }, 00:18:10.053 { 00:18:10.053 "name": "BaseBdev3", 00:18:10.053 "uuid": "cc82b206-62c0-4d30-b667-f865470d3020", 00:18:10.053 "is_configured": true, 00:18:10.053 "data_offset": 0, 00:18:10.053 "data_size": 65536 00:18:10.053 } 00:18:10.053 ] 00:18:10.053 } 00:18:10.053 } 00:18:10.053 }' 00:18:10.053 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.053 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:10.053 BaseBdev2 00:18:10.053 BaseBdev3' 00:18:10.053 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:10.053 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.053 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:10.311 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.311 "name": "BaseBdev1", 00:18:10.311 "aliases": [ 00:18:10.311 "c3845a2f-3a26-4030-b979-4ca3b662f876" 00:18:10.311 ], 00:18:10.311 "product_name": "Malloc disk", 00:18:10.311 "block_size": 512, 00:18:10.311 "num_blocks": 65536, 00:18:10.311 "uuid": "c3845a2f-3a26-4030-b979-4ca3b662f876", 
00:18:10.311 "assigned_rate_limits": { 00:18:10.311 "rw_ios_per_sec": 0, 00:18:10.311 "rw_mbytes_per_sec": 0, 00:18:10.311 "r_mbytes_per_sec": 0, 00:18:10.311 "w_mbytes_per_sec": 0 00:18:10.311 }, 00:18:10.311 "claimed": true, 00:18:10.311 "claim_type": "exclusive_write", 00:18:10.311 "zoned": false, 00:18:10.311 "supported_io_types": { 00:18:10.311 "read": true, 00:18:10.311 "write": true, 00:18:10.311 "unmap": true, 00:18:10.311 "flush": true, 00:18:10.311 "reset": true, 00:18:10.311 "nvme_admin": false, 00:18:10.311 "nvme_io": false, 00:18:10.311 "nvme_io_md": false, 00:18:10.311 "write_zeroes": true, 00:18:10.311 "zcopy": true, 00:18:10.311 "get_zone_info": false, 00:18:10.311 "zone_management": false, 00:18:10.311 "zone_append": false, 00:18:10.311 "compare": false, 00:18:10.311 "compare_and_write": false, 00:18:10.311 "abort": true, 00:18:10.311 "seek_hole": false, 00:18:10.311 "seek_data": false, 00:18:10.311 "copy": true, 00:18:10.311 "nvme_iov_md": false 00:18:10.311 }, 00:18:10.311 "memory_domains": [ 00:18:10.311 { 00:18:10.311 "dma_device_id": "system", 00:18:10.311 "dma_device_type": 1 00:18:10.311 }, 00:18:10.311 { 00:18:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.311 "dma_device_type": 2 00:18:10.311 } 00:18:10.311 ], 00:18:10.311 "driver_specific": {} 00:18:10.311 }' 00:18:10.311 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.311 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.311 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:10.311 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.570 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.829 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.829 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:10.829 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.829 23:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:10.829 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.829 "name": "BaseBdev2", 00:18:10.829 "aliases": [ 00:18:10.829 "1a2f25c5-929d-415f-86e5-fc5152511b21" 00:18:10.829 ], 00:18:10.829 "product_name": "Malloc disk", 00:18:10.829 "block_size": 512, 00:18:10.829 "num_blocks": 65536, 00:18:10.829 "uuid": "1a2f25c5-929d-415f-86e5-fc5152511b21", 00:18:10.829 "assigned_rate_limits": { 00:18:10.829 "rw_ios_per_sec": 0, 00:18:10.829 "rw_mbytes_per_sec": 0, 00:18:10.829 "r_mbytes_per_sec": 0, 00:18:10.829 "w_mbytes_per_sec": 0 00:18:10.829 }, 
00:18:10.829 "claimed": true, 00:18:10.829 "claim_type": "exclusive_write", 00:18:10.829 "zoned": false, 00:18:10.829 "supported_io_types": { 00:18:10.829 "read": true, 00:18:10.829 "write": true, 00:18:10.829 "unmap": true, 00:18:10.829 "flush": true, 00:18:10.829 "reset": true, 00:18:10.829 "nvme_admin": false, 00:18:10.829 "nvme_io": false, 00:18:10.829 "nvme_io_md": false, 00:18:10.829 "write_zeroes": true, 00:18:10.829 "zcopy": true, 00:18:10.829 "get_zone_info": false, 00:18:10.829 "zone_management": false, 00:18:10.829 "zone_append": false, 00:18:10.829 "compare": false, 00:18:10.829 "compare_and_write": false, 00:18:10.829 "abort": true, 00:18:10.829 "seek_hole": false, 00:18:10.829 "seek_data": false, 00:18:10.829 "copy": true, 00:18:10.829 "nvme_iov_md": false 00:18:10.829 }, 00:18:10.829 "memory_domains": [ 00:18:10.829 { 00:18:10.829 "dma_device_id": "system", 00:18:10.829 "dma_device_type": 1 00:18:10.829 }, 00:18:10.829 { 00:18:10.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.829 "dma_device_type": 2 00:18:10.829 } 00:18:10.829 ], 00:18:10.829 "driver_specific": {} 00:18:10.829 }' 00:18:10.829 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:11.087 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.346 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.346 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:11.346 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:11.346 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:11.346 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:11.604 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:11.604 "name": "BaseBdev3", 00:18:11.605 "aliases": [ 00:18:11.605 "cc82b206-62c0-4d30-b667-f865470d3020" 00:18:11.605 ], 00:18:11.605 "product_name": "Malloc disk", 00:18:11.605 "block_size": 512, 00:18:11.605 "num_blocks": 65536, 00:18:11.605 "uuid": "cc82b206-62c0-4d30-b667-f865470d3020", 00:18:11.605 "assigned_rate_limits": { 00:18:11.605 "rw_ios_per_sec": 0, 00:18:11.605 "rw_mbytes_per_sec": 0, 00:18:11.605 "r_mbytes_per_sec": 0, 00:18:11.605 "w_mbytes_per_sec": 0 00:18:11.605 }, 00:18:11.605 "claimed": true, 00:18:11.605 "claim_type": "exclusive_write", 00:18:11.605 "zoned": false, 00:18:11.605 "supported_io_types": { 00:18:11.605 "read": true, 00:18:11.605 "write": true, 
00:18:11.605 "unmap": true, 00:18:11.605 "flush": true, 00:18:11.605 "reset": true, 00:18:11.605 "nvme_admin": false, 00:18:11.605 "nvme_io": false, 00:18:11.605 "nvme_io_md": false, 00:18:11.605 "write_zeroes": true, 00:18:11.605 "zcopy": true, 00:18:11.605 "get_zone_info": false, 00:18:11.605 "zone_management": false, 00:18:11.605 "zone_append": false, 00:18:11.605 "compare": false, 00:18:11.605 "compare_and_write": false, 00:18:11.605 "abort": true, 00:18:11.605 "seek_hole": false, 00:18:11.605 "seek_data": false, 00:18:11.605 "copy": true, 00:18:11.605 "nvme_iov_md": false 00:18:11.605 }, 00:18:11.605 "memory_domains": [ 00:18:11.605 { 00:18:11.605 "dma_device_id": "system", 00:18:11.605 "dma_device_type": 1 00:18:11.605 }, 00:18:11.605 { 00:18:11.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.605 "dma_device_type": 2 00:18:11.605 } 00:18:11.605 ], 00:18:11.605 "driver_specific": {} 00:18:11.605 }' 00:18:11.605 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.605 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.605 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:11.605 23:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.863 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.122 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.122 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:12.381 [2024-07-13 23:04:01.533229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:12.381 [2024-07-13 23:04:01.533475] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.381 [2024-07-13 23:04:01.533700] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.381 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.640 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.640 "name": "Existed_Raid", 00:18:12.640 "uuid": "75d61ad2-511d-46c4-a298-337cc85c8bae", 00:18:12.640 "strip_size_kb": 64, 00:18:12.640 "state": "offline", 00:18:12.640 "raid_level": "concat", 00:18:12.640 "superblock": false, 00:18:12.640 "num_base_bdevs": 3, 00:18:12.640 "num_base_bdevs_discovered": 2, 00:18:12.640 "num_base_bdevs_operational": 2, 00:18:12.640 "base_bdevs_list": [ 00:18:12.640 { 00:18:12.640 "name": null, 00:18:12.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.640 "is_configured": false, 00:18:12.640 "data_offset": 0, 00:18:12.640 "data_size": 65536 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "name": "BaseBdev2", 00:18:12.640 "uuid": "1a2f25c5-929d-415f-86e5-fc5152511b21", 00:18:12.640 "is_configured": true, 00:18:12.640 "data_offset": 0, 00:18:12.640 "data_size": 65536 00:18:12.640 }, 00:18:12.640 { 00:18:12.640 "name": "BaseBdev3", 00:18:12.640 "uuid": "cc82b206-62c0-4d30-b667-f865470d3020", 00:18:12.640 "is_configured": true, 00:18:12.640 "data_offset": 0, 00:18:12.640 "data_size": 65536 00:18:12.640 } 00:18:12.640 ] 00:18:12.640 }' 00:18:12.640 23:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.640 23:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.233 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:13.233 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:13.233 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.233 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:13.491 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:13.491 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.491 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:13.749 [2024-07-13 23:04:02.932335] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:13.749 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:13.749 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:13.749 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.749 23:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:14.007 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:14.007 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.007 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:14.007 [2024-07-13 23:04:03.410556] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:14.007 [2024-07-13 23:04:03.410782] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:14.264 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:14.264 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:14.265 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.265 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.523 BaseBdev2 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:14.523 23:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.781 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:18:15.039 [ 00:18:15.039 { 00:18:15.039 "name": "BaseBdev2", 00:18:15.039 "aliases": [ 00:18:15.039 "1014ffaa-8773-4be7-b3cd-756212af35a5" 00:18:15.039 ], 00:18:15.039 "product_name": "Malloc disk", 00:18:15.039 "block_size": 512, 00:18:15.039 "num_blocks": 65536, 00:18:15.039 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:15.039 "assigned_rate_limits": { 00:18:15.039 "rw_ios_per_sec": 0, 00:18:15.039 "rw_mbytes_per_sec": 0, 00:18:15.039 "r_mbytes_per_sec": 0, 00:18:15.039 "w_mbytes_per_sec": 0 00:18:15.039 }, 00:18:15.039 "claimed": false, 00:18:15.039 "zoned": false, 00:18:15.039 "supported_io_types": { 00:18:15.039 "read": true, 00:18:15.039 "write": true, 00:18:15.039 "unmap": true, 00:18:15.039 "flush": true, 00:18:15.039 "reset": true, 00:18:15.039 "nvme_admin": false, 00:18:15.039 "nvme_io": false, 00:18:15.039 "nvme_io_md": false, 00:18:15.039 "write_zeroes": true, 00:18:15.039 "zcopy": true, 00:18:15.039 "get_zone_info": false, 00:18:15.039 "zone_management": false, 00:18:15.039 "zone_append": false, 00:18:15.039 "compare": false, 00:18:15.039 "compare_and_write": false, 00:18:15.039 "abort": true, 00:18:15.039 "seek_hole": false, 00:18:15.039 "seek_data": false, 00:18:15.039 "copy": true, 00:18:15.039 "nvme_iov_md": false 00:18:15.039 }, 00:18:15.039 "memory_domains": [ 00:18:15.039 { 00:18:15.039 "dma_device_id": "system", 00:18:15.039 "dma_device_type": 1 00:18:15.039 }, 00:18:15.039 { 00:18:15.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.039 "dma_device_type": 2 00:18:15.039 } 00:18:15.039 ], 00:18:15.039 "driver_specific": {} 00:18:15.039 } 00:18:15.039 ] 00:18:15.039 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:15.039 23:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:15.039 23:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:15.039 23:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:15.297 BaseBdev3 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:15.297 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:15.553 23:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:15.811 [ 00:18:15.811 { 00:18:15.811 "name": "BaseBdev3", 00:18:15.811 "aliases": [ 00:18:15.811 "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d" 00:18:15.811 ], 00:18:15.811 "product_name": "Malloc disk", 00:18:15.811 "block_size": 512, 00:18:15.811 "num_blocks": 65536, 00:18:15.811 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:15.811 
"assigned_rate_limits": { 00:18:15.811 "rw_ios_per_sec": 0, 00:18:15.811 "rw_mbytes_per_sec": 0, 00:18:15.811 "r_mbytes_per_sec": 0, 00:18:15.811 "w_mbytes_per_sec": 0 00:18:15.811 }, 00:18:15.811 "claimed": false, 00:18:15.811 "zoned": false, 00:18:15.811 "supported_io_types": { 00:18:15.811 "read": true, 00:18:15.811 "write": true, 00:18:15.811 "unmap": true, 00:18:15.811 "flush": true, 00:18:15.811 "reset": true, 00:18:15.811 "nvme_admin": false, 00:18:15.811 "nvme_io": false, 00:18:15.811 "nvme_io_md": false, 00:18:15.811 "write_zeroes": true, 00:18:15.811 "zcopy": true, 00:18:15.811 "get_zone_info": false, 00:18:15.811 "zone_management": false, 00:18:15.811 "zone_append": false, 00:18:15.811 "compare": false, 00:18:15.811 "compare_and_write": false, 00:18:15.811 "abort": true, 00:18:15.811 "seek_hole": false, 00:18:15.811 "seek_data": false, 00:18:15.811 "copy": true, 00:18:15.811 "nvme_iov_md": false 00:18:15.811 }, 00:18:15.811 "memory_domains": [ 00:18:15.811 { 00:18:15.811 "dma_device_id": "system", 00:18:15.811 "dma_device_type": 1 00:18:15.811 }, 00:18:15.811 { 00:18:15.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.811 "dma_device_type": 2 00:18:15.811 } 00:18:15.811 ], 00:18:15.811 "driver_specific": {} 00:18:15.811 } 00:18:15.811 ] 00:18:15.811 23:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:15.811 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:15.811 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:15.811 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:16.069 [2024-07-13 23:04:05.291315] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.069 [2024-07-13 23:04:05.292299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.069 [2024-07-13 23:04:05.292614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.069 [2024-07-13 23:04:05.294872] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.069 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.070 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.070 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.070 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.070 23:04:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.070 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.328 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.328 "name": "Existed_Raid", 00:18:16.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.328 "strip_size_kb": 64, 00:18:16.328 "state": "configuring", 00:18:16.328 "raid_level": "concat", 00:18:16.328 "superblock": false, 00:18:16.328 "num_base_bdevs": 3, 00:18:16.328 "num_base_bdevs_discovered": 2, 00:18:16.328 "num_base_bdevs_operational": 3, 00:18:16.328 "base_bdevs_list": [ 00:18:16.328 { 00:18:16.328 "name": "BaseBdev1", 00:18:16.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.328 "is_configured": false, 00:18:16.328 "data_offset": 0, 00:18:16.328 "data_size": 0 00:18:16.328 }, 00:18:16.328 { 00:18:16.328 "name": "BaseBdev2", 00:18:16.328 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:16.328 "is_configured": true, 00:18:16.328 "data_offset": 0, 00:18:16.328 "data_size": 65536 00:18:16.328 }, 00:18:16.328 { 00:18:16.328 "name": "BaseBdev3", 00:18:16.328 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:16.328 "is_configured": true, 00:18:16.328 "data_offset": 0, 00:18:16.328 "data_size": 65536 00:18:16.328 } 00:18:16.328 ] 00:18:16.328 }' 00:18:16.328 23:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.328 23:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.895 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:17.152 [2024-07-13 23:04:06.439644] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.152 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.410 23:04:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.410 "name": "Existed_Raid", 00:18:17.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.410 "strip_size_kb": 64, 00:18:17.410 "state": "configuring", 00:18:17.410 "raid_level": "concat", 00:18:17.410 "superblock": false, 00:18:17.410 "num_base_bdevs": 3, 00:18:17.410 "num_base_bdevs_discovered": 1, 00:18:17.410 "num_base_bdevs_operational": 3, 00:18:17.410 "base_bdevs_list": [ 00:18:17.410 { 00:18:17.410 "name": "BaseBdev1", 00:18:17.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.410 "is_configured": false, 00:18:17.410 "data_offset": 0, 00:18:17.410 "data_size": 0 00:18:17.410 }, 00:18:17.410 { 00:18:17.410 "name": null, 00:18:17.410 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:17.410 "is_configured": false, 00:18:17.410 "data_offset": 0, 00:18:17.410 "data_size": 65536 00:18:17.410 }, 00:18:17.410 { 00:18:17.410 "name": "BaseBdev3", 00:18:17.410 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:17.410 "is_configured": true, 00:18:17.410 "data_offset": 0, 00:18:17.410 "data_size": 65536 00:18:17.410 } 00:18:17.410 ] 00:18:17.410 }' 00:18:17.410 23:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.410 23:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 23:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.975 23:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:18.559 [2024-07-13 23:04:07.860807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.559 BaseBdev1 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:18.559 23:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.816 23:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.075 [ 00:18:19.075 { 00:18:19.075 "name": "BaseBdev1", 00:18:19.075 "aliases": [ 00:18:19.075 "226db4dc-25cc-4893-bb74-c4b161d2744e" 00:18:19.075 ], 00:18:19.075 "product_name": "Malloc disk", 00:18:19.075 "block_size": 512, 00:18:19.075 "num_blocks": 65536, 00:18:19.075 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:19.075 "assigned_rate_limits": { 00:18:19.075 
"rw_ios_per_sec": 0, 00:18:19.075 "rw_mbytes_per_sec": 0, 00:18:19.075 "r_mbytes_per_sec": 0, 00:18:19.075 "w_mbytes_per_sec": 0 00:18:19.075 }, 00:18:19.075 "claimed": true, 00:18:19.075 "claim_type": "exclusive_write", 00:18:19.075 "zoned": false, 00:18:19.075 "supported_io_types": { 00:18:19.075 "read": true, 00:18:19.075 "write": true, 00:18:19.075 "unmap": true, 00:18:19.075 "flush": true, 00:18:19.075 "reset": true, 00:18:19.075 "nvme_admin": false, 00:18:19.075 "nvme_io": false, 00:18:19.075 "nvme_io_md": false, 00:18:19.075 "write_zeroes": true, 00:18:19.075 "zcopy": true, 00:18:19.075 "get_zone_info": false, 00:18:19.075 "zone_management": false, 00:18:19.075 "zone_append": false, 00:18:19.075 "compare": false, 00:18:19.075 "compare_and_write": false, 00:18:19.075 "abort": true, 00:18:19.075 "seek_hole": false, 00:18:19.075 "seek_data": false, 00:18:19.075 "copy": true, 00:18:19.075 "nvme_iov_md": false 00:18:19.075 }, 00:18:19.075 "memory_domains": [ 00:18:19.075 { 00:18:19.075 "dma_device_id": "system", 00:18:19.075 "dma_device_type": 1 00:18:19.075 }, 00:18:19.075 { 00:18:19.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.075 "dma_device_type": 2 00:18:19.075 } 00:18:19.075 ], 00:18:19.075 "driver_specific": {} 00:18:19.075 } 00:18:19.075 ] 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.075 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.333 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.333 "name": "Existed_Raid", 00:18:19.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.333 "strip_size_kb": 64, 00:18:19.333 "state": "configuring", 00:18:19.333 "raid_level": "concat", 00:18:19.333 "superblock": false, 00:18:19.333 "num_base_bdevs": 3, 00:18:19.333 "num_base_bdevs_discovered": 2, 00:18:19.333 "num_base_bdevs_operational": 3, 00:18:19.333 "base_bdevs_list": [ 00:18:19.333 { 00:18:19.333 "name": "BaseBdev1", 00:18:19.333 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:19.333 "is_configured": true, 00:18:19.333 "data_offset": 0, 00:18:19.333 
"data_size": 65536 00:18:19.333 }, 00:18:19.333 { 00:18:19.333 "name": null, 00:18:19.333 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:19.333 "is_configured": false, 00:18:19.333 "data_offset": 0, 00:18:19.333 "data_size": 65536 00:18:19.333 }, 00:18:19.333 { 00:18:19.333 "name": "BaseBdev3", 00:18:19.333 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:19.333 "is_configured": true, 00:18:19.333 "data_offset": 0, 00:18:19.333 "data_size": 65536 00:18:19.333 } 00:18:19.333 ] 00:18:19.333 }' 00:18:19.333 23:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.333 23:04:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.898 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.898 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:20.156 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:20.156 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:20.414 [2024-07-13 23:04:09.781407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.414 23:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.671 23:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.671 "name": "Existed_Raid", 00:18:20.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.671 "strip_size_kb": 64, 00:18:20.671 "state": "configuring", 00:18:20.671 "raid_level": "concat", 00:18:20.671 "superblock": false, 00:18:20.671 "num_base_bdevs": 3, 00:18:20.671 "num_base_bdevs_discovered": 1, 00:18:20.671 "num_base_bdevs_operational": 3, 00:18:20.671 "base_bdevs_list": [ 00:18:20.671 { 00:18:20.671 "name": "BaseBdev1", 00:18:20.671 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:20.671 "is_configured": 
true, 00:18:20.671 "data_offset": 0, 00:18:20.671 "data_size": 65536 00:18:20.671 }, 00:18:20.671 { 00:18:20.671 "name": null, 00:18:20.671 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:20.671 "is_configured": false, 00:18:20.671 "data_offset": 0, 00:18:20.671 "data_size": 65536 00:18:20.671 }, 00:18:20.671 { 00:18:20.671 "name": null, 00:18:20.671 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:20.671 "is_configured": false, 00:18:20.671 "data_offset": 0, 00:18:20.671 "data_size": 65536 00:18:20.671 } 00:18:20.671 ] 00:18:20.671 }' 00:18:20.671 23:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.671 23:04:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.236 23:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.236 23:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.802 23:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:21.802 23:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:21.802 [2024-07-13 23:04:11.169735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.802 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.803 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.803 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.803 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.803 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.060 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.060 "name": "Existed_Raid", 00:18:22.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.060 "strip_size_kb": 64, 00:18:22.060 "state": "configuring", 00:18:22.060 "raid_level": "concat", 00:18:22.060 "superblock": false, 00:18:22.060 "num_base_bdevs": 3, 00:18:22.060 "num_base_bdevs_discovered": 2, 00:18:22.060 "num_base_bdevs_operational": 3, 00:18:22.060 "base_bdevs_list": [ 00:18:22.060 { 00:18:22.060 "name": "BaseBdev1", 00:18:22.060 
"uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:22.060 "is_configured": true, 00:18:22.060 "data_offset": 0, 00:18:22.060 "data_size": 65536 00:18:22.061 }, 00:18:22.061 { 00:18:22.061 "name": null, 00:18:22.061 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:22.061 "is_configured": false, 00:18:22.061 "data_offset": 0, 00:18:22.061 "data_size": 65536 00:18:22.061 }, 00:18:22.061 { 00:18:22.061 "name": "BaseBdev3", 00:18:22.061 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:22.061 "is_configured": true, 00:18:22.061 "data_offset": 0, 00:18:22.061 "data_size": 65536 00:18:22.061 } 00:18:22.061 ] 00:18:22.061 }' 00:18:22.061 23:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.061 23:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.014 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.014 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:23.014 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:23.014 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.273 [2024-07-13 23:04:12.562017] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.273 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.531 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.531 "name": "Existed_Raid", 00:18:23.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.531 "strip_size_kb": 64, 00:18:23.531 "state": "configuring", 00:18:23.531 "raid_level": "concat", 00:18:23.531 "superblock": false, 00:18:23.531 "num_base_bdevs": 3, 00:18:23.531 "num_base_bdevs_discovered": 1, 00:18:23.531 "num_base_bdevs_operational": 3, 00:18:23.531 "base_bdevs_list": [ 00:18:23.531 { 
00:18:23.531 "name": null, 00:18:23.531 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:23.531 "is_configured": false, 00:18:23.531 "data_offset": 0, 00:18:23.531 "data_size": 65536 00:18:23.531 }, 00:18:23.531 { 00:18:23.531 "name": null, 00:18:23.531 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:23.531 "is_configured": false, 00:18:23.531 "data_offset": 0, 00:18:23.531 "data_size": 65536 00:18:23.531 }, 00:18:23.531 { 00:18:23.531 "name": "BaseBdev3", 00:18:23.531 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:23.531 "is_configured": true, 00:18:23.531 "data_offset": 0, 00:18:23.531 "data_size": 65536 00:18:23.531 } 00:18:23.531 ] 00:18:23.531 }' 00:18:23.531 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.531 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.097 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.097 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:24.663 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:24.663 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:24.663 [2024-07-13 23:04:14.023037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.663 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.922 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.922 "name": "Existed_Raid", 00:18:24.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.922 "strip_size_kb": 64, 00:18:24.922 "state": "configuring", 00:18:24.922 "raid_level": "concat", 00:18:24.922 "superblock": false, 00:18:24.922 "num_base_bdevs": 3, 00:18:24.922 "num_base_bdevs_discovered": 2, 00:18:24.922 
"num_base_bdevs_operational": 3, 00:18:24.922 "base_bdevs_list": [ 00:18:24.922 { 00:18:24.922 "name": null, 00:18:24.922 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:24.922 "is_configured": false, 00:18:24.922 "data_offset": 0, 00:18:24.922 "data_size": 65536 00:18:24.922 }, 00:18:24.922 { 00:18:24.922 "name": "BaseBdev2", 00:18:24.922 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:24.922 "is_configured": true, 00:18:24.922 "data_offset": 0, 00:18:24.922 "data_size": 65536 00:18:24.922 }, 00:18:24.922 { 00:18:24.922 "name": "BaseBdev3", 00:18:24.922 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:24.922 "is_configured": true, 00:18:24.922 "data_offset": 0, 00:18:24.922 "data_size": 65536 00:18:24.922 } 00:18:24.922 ] 00:18:24.922 }' 00:18:24.922 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.922 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.490 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.490 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:25.748 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:25.748 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.748 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:26.008 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 226db4dc-25cc-4893-bb74-c4b161d2744e 00:18:26.267 [2024-07-13 23:04:15.608363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:26.267 [2024-07-13 23:04:15.608637] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:26.267 [2024-07-13 23:04:15.608682] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:26.267 [2024-07-13 23:04:15.608868] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:26.267 [2024-07-13 23:04:15.609366] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:26.267 [2024-07-13 23:04:15.609507] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:26.267 [2024-07-13 23:04:15.609812] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.267 NewBaseBdev 00:18:26.267 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:26.267 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:26.267 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:26.267 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:26.267 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:26.267 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:26.267 
23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.526 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:26.792 [ 00:18:26.792 { 00:18:26.792 "name": "NewBaseBdev", 00:18:26.792 "aliases": [ 00:18:26.792 "226db4dc-25cc-4893-bb74-c4b161d2744e" 00:18:26.792 ], 00:18:26.792 "product_name": "Malloc disk", 00:18:26.792 "block_size": 512, 00:18:26.792 "num_blocks": 65536, 00:18:26.792 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:26.792 "assigned_rate_limits": { 00:18:26.792 "rw_ios_per_sec": 0, 00:18:26.792 "rw_mbytes_per_sec": 0, 00:18:26.792 "r_mbytes_per_sec": 0, 00:18:26.792 "w_mbytes_per_sec": 0 00:18:26.792 }, 00:18:26.792 "claimed": true, 00:18:26.792 "claim_type": "exclusive_write", 00:18:26.792 "zoned": false, 00:18:26.792 "supported_io_types": { 00:18:26.792 "read": true, 00:18:26.792 "write": true, 00:18:26.792 "unmap": true, 00:18:26.792 "flush": true, 00:18:26.792 "reset": true, 00:18:26.792 "nvme_admin": false, 00:18:26.792 "nvme_io": false, 00:18:26.792 "nvme_io_md": false, 00:18:26.792 "write_zeroes": true, 00:18:26.792 "zcopy": true, 00:18:26.792 "get_zone_info": false, 00:18:26.792 "zone_management": false, 00:18:26.792 "zone_append": false, 00:18:26.792 "compare": false, 00:18:26.792 "compare_and_write": false, 00:18:26.792 "abort": true, 00:18:26.792 "seek_hole": false, 00:18:26.792 "seek_data": false, 00:18:26.792 "copy": true, 00:18:26.792 "nvme_iov_md": false 00:18:26.792 }, 00:18:26.792 "memory_domains": [ 00:18:26.792 { 00:18:26.792 "dma_device_id": "system", 00:18:26.792 "dma_device_type": 1 00:18:26.792 }, 00:18:26.792 { 00:18:26.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.792 "dma_device_type": 2 00:18:26.792 } 00:18:26.792 ], 00:18:26.792 "driver_specific": {} 00:18:26.792 } 00:18:26.792 ] 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.792 23:04:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.050 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.050 "name": "Existed_Raid", 00:18:27.050 "uuid": "f4dac1bd-85ee-41b6-972e-53a8a42b2c29", 00:18:27.050 "strip_size_kb": 64, 00:18:27.050 "state": "online", 00:18:27.050 "raid_level": "concat", 00:18:27.050 "superblock": false, 00:18:27.050 "num_base_bdevs": 3, 00:18:27.050 "num_base_bdevs_discovered": 3, 00:18:27.050 "num_base_bdevs_operational": 3, 00:18:27.050 "base_bdevs_list": [ 00:18:27.050 { 00:18:27.050 "name": "NewBaseBdev", 00:18:27.050 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:27.050 "is_configured": true, 00:18:27.050 "data_offset": 0, 00:18:27.050 "data_size": 65536 00:18:27.050 }, 00:18:27.050 { 00:18:27.050 "name": "BaseBdev2", 00:18:27.050 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:27.050 "is_configured": true, 00:18:27.050 "data_offset": 0, 00:18:27.050 "data_size": 65536 00:18:27.050 }, 00:18:27.050 { 00:18:27.050 "name": "BaseBdev3", 00:18:27.050 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:27.050 "is_configured": true, 00:18:27.050 "data_offset": 0, 00:18:27.050 "data_size": 65536 00:18:27.050 } 00:18:27.050 ] 00:18:27.050 }' 00:18:27.050 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.050 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:27.616 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:27.875 [2024-07-13 23:04:17.245108] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.875 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:27.875 "name": "Existed_Raid", 00:18:27.875 "aliases": [ 00:18:27.875 "f4dac1bd-85ee-41b6-972e-53a8a42b2c29" 00:18:27.875 ], 00:18:27.875 "product_name": "Raid Volume", 00:18:27.875 "block_size": 512, 00:18:27.875 "num_blocks": 196608, 00:18:27.875 "uuid": "f4dac1bd-85ee-41b6-972e-53a8a42b2c29", 00:18:27.875 "assigned_rate_limits": { 00:18:27.875 "rw_ios_per_sec": 0, 00:18:27.875 "rw_mbytes_per_sec": 0, 00:18:27.875 "r_mbytes_per_sec": 0, 00:18:27.875 "w_mbytes_per_sec": 0 00:18:27.875 }, 00:18:27.875 "claimed": false, 00:18:27.875 "zoned": false, 00:18:27.875 "supported_io_types": { 00:18:27.875 "read": true, 00:18:27.875 "write": true, 00:18:27.875 "unmap": true, 00:18:27.875 "flush": true, 00:18:27.875 "reset": true, 00:18:27.875 "nvme_admin": false, 00:18:27.875 "nvme_io": false, 00:18:27.875 "nvme_io_md": false, 00:18:27.875 "write_zeroes": true, 00:18:27.875 
"zcopy": false, 00:18:27.875 "get_zone_info": false, 00:18:27.875 "zone_management": false, 00:18:27.875 "zone_append": false, 00:18:27.875 "compare": false, 00:18:27.875 "compare_and_write": false, 00:18:27.875 "abort": false, 00:18:27.875 "seek_hole": false, 00:18:27.875 "seek_data": false, 00:18:27.875 "copy": false, 00:18:27.875 "nvme_iov_md": false 00:18:27.875 }, 00:18:27.875 "memory_domains": [ 00:18:27.875 { 00:18:27.875 "dma_device_id": "system", 00:18:27.875 "dma_device_type": 1 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.875 "dma_device_type": 2 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "dma_device_id": "system", 00:18:27.875 "dma_device_type": 1 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.875 "dma_device_type": 2 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "dma_device_id": "system", 00:18:27.875 "dma_device_type": 1 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.875 "dma_device_type": 2 00:18:27.875 } 00:18:27.875 ], 00:18:27.875 "driver_specific": { 00:18:27.875 "raid": { 00:18:27.875 "uuid": "f4dac1bd-85ee-41b6-972e-53a8a42b2c29", 00:18:27.875 "strip_size_kb": 64, 00:18:27.875 "state": "online", 00:18:27.875 "raid_level": "concat", 00:18:27.875 "superblock": false, 00:18:27.875 "num_base_bdevs": 3, 00:18:27.875 "num_base_bdevs_discovered": 3, 00:18:27.875 "num_base_bdevs_operational": 3, 00:18:27.875 "base_bdevs_list": [ 00:18:27.875 { 00:18:27.875 "name": "NewBaseBdev", 00:18:27.875 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:27.875 "is_configured": true, 00:18:27.875 "data_offset": 0, 00:18:27.875 "data_size": 65536 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "name": "BaseBdev2", 00:18:27.875 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:27.875 "is_configured": true, 00:18:27.875 "data_offset": 0, 00:18:27.875 "data_size": 65536 00:18:27.875 }, 00:18:27.875 { 00:18:27.875 "name": "BaseBdev3", 00:18:27.875 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:27.875 "is_configured": true, 00:18:27.875 "data_offset": 0, 00:18:27.875 "data_size": 65536 00:18:27.875 } 00:18:27.875 ] 00:18:27.875 } 00:18:27.875 } 00:18:27.876 }' 00:18:27.876 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.134 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:28.134 BaseBdev2 00:18:28.134 BaseBdev3' 00:18:28.134 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.134 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:28.134 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.134 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.134 "name": "NewBaseBdev", 00:18:28.134 "aliases": [ 00:18:28.134 "226db4dc-25cc-4893-bb74-c4b161d2744e" 00:18:28.134 ], 00:18:28.134 "product_name": "Malloc disk", 00:18:28.134 "block_size": 512, 00:18:28.134 "num_blocks": 65536, 00:18:28.134 "uuid": "226db4dc-25cc-4893-bb74-c4b161d2744e", 00:18:28.134 "assigned_rate_limits": { 00:18:28.134 "rw_ios_per_sec": 0, 00:18:28.134 "rw_mbytes_per_sec": 0, 00:18:28.134 "r_mbytes_per_sec": 0, 00:18:28.134 
"w_mbytes_per_sec": 0 00:18:28.134 }, 00:18:28.134 "claimed": true, 00:18:28.134 "claim_type": "exclusive_write", 00:18:28.134 "zoned": false, 00:18:28.134 "supported_io_types": { 00:18:28.134 "read": true, 00:18:28.134 "write": true, 00:18:28.134 "unmap": true, 00:18:28.134 "flush": true, 00:18:28.134 "reset": true, 00:18:28.134 "nvme_admin": false, 00:18:28.134 "nvme_io": false, 00:18:28.134 "nvme_io_md": false, 00:18:28.134 "write_zeroes": true, 00:18:28.134 "zcopy": true, 00:18:28.134 "get_zone_info": false, 00:18:28.134 "zone_management": false, 00:18:28.134 "zone_append": false, 00:18:28.134 "compare": false, 00:18:28.134 "compare_and_write": false, 00:18:28.134 "abort": true, 00:18:28.134 "seek_hole": false, 00:18:28.134 "seek_data": false, 00:18:28.134 "copy": true, 00:18:28.134 "nvme_iov_md": false 00:18:28.134 }, 00:18:28.134 "memory_domains": [ 00:18:28.134 { 00:18:28.134 "dma_device_id": "system", 00:18:28.134 "dma_device_type": 1 00:18:28.134 }, 00:18:28.134 { 00:18:28.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.134 "dma_device_type": 2 00:18:28.134 } 00:18:28.134 ], 00:18:28.134 "driver_specific": {} 00:18:28.134 }' 00:18:28.134 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.393 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.651 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.651 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.651 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.651 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.651 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.651 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:28.652 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.910 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.910 "name": "BaseBdev2", 00:18:28.910 "aliases": [ 00:18:28.910 "1014ffaa-8773-4be7-b3cd-756212af35a5" 00:18:28.910 ], 00:18:28.910 "product_name": "Malloc disk", 00:18:28.910 "block_size": 512, 00:18:28.910 "num_blocks": 65536, 00:18:28.910 "uuid": "1014ffaa-8773-4be7-b3cd-756212af35a5", 00:18:28.910 "assigned_rate_limits": { 00:18:28.910 "rw_ios_per_sec": 0, 00:18:28.910 "rw_mbytes_per_sec": 0, 00:18:28.910 "r_mbytes_per_sec": 0, 00:18:28.910 "w_mbytes_per_sec": 0 00:18:28.910 }, 00:18:28.910 "claimed": true, 00:18:28.910 "claim_type": "exclusive_write", 00:18:28.910 "zoned": false, 00:18:28.910 "supported_io_types": { 00:18:28.910 "read": 
true, 00:18:28.910 "write": true, 00:18:28.910 "unmap": true, 00:18:28.910 "flush": true, 00:18:28.910 "reset": true, 00:18:28.910 "nvme_admin": false, 00:18:28.910 "nvme_io": false, 00:18:28.910 "nvme_io_md": false, 00:18:28.910 "write_zeroes": true, 00:18:28.910 "zcopy": true, 00:18:28.910 "get_zone_info": false, 00:18:28.910 "zone_management": false, 00:18:28.910 "zone_append": false, 00:18:28.910 "compare": false, 00:18:28.910 "compare_and_write": false, 00:18:28.910 "abort": true, 00:18:28.910 "seek_hole": false, 00:18:28.910 "seek_data": false, 00:18:28.910 "copy": true, 00:18:28.910 "nvme_iov_md": false 00:18:28.910 }, 00:18:28.910 "memory_domains": [ 00:18:28.910 { 00:18:28.910 "dma_device_id": "system", 00:18:28.910 "dma_device_type": 1 00:18:28.910 }, 00:18:28.910 { 00:18:28.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.910 "dma_device_type": 2 00:18:28.910 } 00:18:28.910 ], 00:18:28.910 "driver_specific": {} 00:18:28.910 }' 00:18:28.910 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.910 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.910 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.169 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.428 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:29.428 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:29.428 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:29.428 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:29.687 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:29.687 "name": "BaseBdev3", 00:18:29.687 "aliases": [ 00:18:29.687 "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d" 00:18:29.687 ], 00:18:29.687 "product_name": "Malloc disk", 00:18:29.687 "block_size": 512, 00:18:29.687 "num_blocks": 65536, 00:18:29.687 "uuid": "9d52f9a0-e008-48d8-9186-fa7b6ef9a24d", 00:18:29.687 "assigned_rate_limits": { 00:18:29.687 "rw_ios_per_sec": 0, 00:18:29.687 "rw_mbytes_per_sec": 0, 00:18:29.687 "r_mbytes_per_sec": 0, 00:18:29.687 "w_mbytes_per_sec": 0 00:18:29.687 }, 00:18:29.687 "claimed": true, 00:18:29.687 "claim_type": "exclusive_write", 00:18:29.687 "zoned": false, 00:18:29.687 "supported_io_types": { 00:18:29.687 "read": true, 00:18:29.687 "write": true, 00:18:29.687 "unmap": true, 00:18:29.687 "flush": true, 00:18:29.687 "reset": true, 00:18:29.687 "nvme_admin": false, 00:18:29.687 "nvme_io": false, 00:18:29.687 
"nvme_io_md": false, 00:18:29.687 "write_zeroes": true, 00:18:29.687 "zcopy": true, 00:18:29.687 "get_zone_info": false, 00:18:29.687 "zone_management": false, 00:18:29.687 "zone_append": false, 00:18:29.687 "compare": false, 00:18:29.687 "compare_and_write": false, 00:18:29.687 "abort": true, 00:18:29.687 "seek_hole": false, 00:18:29.687 "seek_data": false, 00:18:29.687 "copy": true, 00:18:29.687 "nvme_iov_md": false 00:18:29.687 }, 00:18:29.687 "memory_domains": [ 00:18:29.687 { 00:18:29.687 "dma_device_id": "system", 00:18:29.687 "dma_device_type": 1 00:18:29.687 }, 00:18:29.687 { 00:18:29.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.687 "dma_device_type": 2 00:18:29.687 } 00:18:29.687 ], 00:18:29.687 "driver_specific": {} 00:18:29.687 }' 00:18:29.687 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.687 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.687 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:29.687 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.687 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.687 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:29.687 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.945 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.945 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:29.945 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.945 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.945 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:29.945 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.206 [2024-07-13 23:04:19.509408] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.206 [2024-07-13 23:04:19.509632] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.206 [2024-07-13 23:04:19.509831] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.206 [2024-07-13 23:04:19.509995] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.206 [2024-07-13 23:04:19.510109] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 138140 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 138140 ']' 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 138140 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138140 00:18:30.206 
23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138140' 00:18:30.206 killing process with pid 138140 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 138140 00:18:30.206 [2024-07-13 23:04:19.551928] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.206 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 138140 00:18:30.206 [2024-07-13 23:04:19.578039] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:30.464 00:18:30.464 real 0m28.810s 00:18:30.464 user 0m55.294s 00:18:30.464 sys 0m3.483s 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.464 ************************************ 00:18:30.464 END TEST raid_state_function_test 00:18:30.464 ************************************ 00:18:30.464 23:04:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:30.464 23:04:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:30.464 23:04:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:30.464 23:04:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.464 23:04:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.464 ************************************ 00:18:30.464 START TEST raid_state_function_test_sb 00:18:30.464 ************************************ 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 
00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:30.464 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=139110 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:30.722 Process raid pid: 139110 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139110' 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 139110 /var/tmp/spdk-raid.sock 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 139110 ']' 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.722 23:04:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.722 [2024-07-13 23:04:19.924967] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:30.722 [2024-07-13 23:04:19.925203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.722 [2024-07-13 23:04:20.071146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.981 [2024-07-13 23:04:20.137432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.981 [2024-07-13 23:04:20.192486] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.981 23:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.981 23:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:30.981 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:31.239 [2024-07-13 23:04:20.489397] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.239 [2024-07-13 23:04:20.489496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.239 [2024-07-13 23:04:20.489528] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.239 [2024-07-13 23:04:20.489547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.239 [2024-07-13 23:04:20.489555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.239 [2024-07-13 23:04:20.489594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.239 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.497 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.497 "name": "Existed_Raid", 00:18:31.497 "uuid": 
"9950cd0a-080a-45d9-b834-7de5a4b9a501", 00:18:31.497 "strip_size_kb": 64, 00:18:31.497 "state": "configuring", 00:18:31.497 "raid_level": "concat", 00:18:31.497 "superblock": true, 00:18:31.497 "num_base_bdevs": 3, 00:18:31.497 "num_base_bdevs_discovered": 0, 00:18:31.497 "num_base_bdevs_operational": 3, 00:18:31.497 "base_bdevs_list": [ 00:18:31.497 { 00:18:31.497 "name": "BaseBdev1", 00:18:31.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.497 "is_configured": false, 00:18:31.497 "data_offset": 0, 00:18:31.497 "data_size": 0 00:18:31.497 }, 00:18:31.497 { 00:18:31.497 "name": "BaseBdev2", 00:18:31.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.497 "is_configured": false, 00:18:31.497 "data_offset": 0, 00:18:31.497 "data_size": 0 00:18:31.497 }, 00:18:31.497 { 00:18:31.497 "name": "BaseBdev3", 00:18:31.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.497 "is_configured": false, 00:18:31.497 "data_offset": 0, 00:18:31.497 "data_size": 0 00:18:31.497 } 00:18:31.497 ] 00:18:31.497 }' 00:18:31.497 23:04:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.497 23:04:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.068 23:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:32.338 [2024-07-13 23:04:21.617706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.338 [2024-07-13 23:04:21.617776] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:32.338 23:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:32.603 [2024-07-13 23:04:21.885803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.603 [2024-07-13 23:04:21.885894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.603 [2024-07-13 23:04:21.885925] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.603 [2024-07-13 23:04:21.885944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.603 [2024-07-13 23:04:21.885952] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:32.603 [2024-07-13 23:04:21.885975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:32.603 23:04:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:32.862 [2024-07-13 23:04:22.149433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.862 BaseBdev1 00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:32.862 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.120 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:33.378 [ 00:18:33.379 { 00:18:33.379 "name": "BaseBdev1", 00:18:33.379 "aliases": [ 00:18:33.379 "566dce97-d54d-4f40-b0c5-64686606776f" 00:18:33.379 ], 00:18:33.379 "product_name": "Malloc disk", 00:18:33.379 "block_size": 512, 00:18:33.379 "num_blocks": 65536, 00:18:33.379 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:33.379 "assigned_rate_limits": { 00:18:33.379 "rw_ios_per_sec": 0, 00:18:33.379 "rw_mbytes_per_sec": 0, 00:18:33.379 "r_mbytes_per_sec": 0, 00:18:33.379 "w_mbytes_per_sec": 0 00:18:33.379 }, 00:18:33.379 "claimed": true, 00:18:33.379 "claim_type": "exclusive_write", 00:18:33.379 "zoned": false, 00:18:33.379 "supported_io_types": { 00:18:33.379 "read": true, 00:18:33.379 "write": true, 00:18:33.379 "unmap": true, 00:18:33.379 "flush": true, 00:18:33.379 "reset": true, 00:18:33.379 "nvme_admin": false, 00:18:33.379 "nvme_io": false, 00:18:33.379 "nvme_io_md": false, 00:18:33.379 "write_zeroes": true, 00:18:33.379 "zcopy": true, 00:18:33.379 "get_zone_info": false, 00:18:33.379 "zone_management": false, 00:18:33.379 "zone_append": false, 00:18:33.379 "compare": false, 00:18:33.379 "compare_and_write": false, 00:18:33.379 "abort": true, 00:18:33.379 "seek_hole": false, 00:18:33.379 "seek_data": false, 00:18:33.379 "copy": true, 00:18:33.379 "nvme_iov_md": false 00:18:33.379 }, 00:18:33.379 "memory_domains": [ 00:18:33.379 { 00:18:33.379 "dma_device_id": "system", 00:18:33.379 "dma_device_type": 1 00:18:33.379 }, 00:18:33.379 { 00:18:33.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.379 "dma_device_type": 2 00:18:33.379 } 00:18:33.379 ], 00:18:33.379 "driver_specific": {} 00:18:33.379 } 00:18:33.379 ] 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.379 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.638 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.638 "name": "Existed_Raid", 00:18:33.638 "uuid": "1fd92dd2-d4a3-4523-8a88-1e95ccabd8f1", 00:18:33.638 "strip_size_kb": 64, 00:18:33.638 "state": "configuring", 00:18:33.638 "raid_level": "concat", 00:18:33.638 "superblock": true, 00:18:33.638 "num_base_bdevs": 3, 00:18:33.638 "num_base_bdevs_discovered": 1, 00:18:33.638 "num_base_bdevs_operational": 3, 00:18:33.638 "base_bdevs_list": [ 00:18:33.638 { 00:18:33.638 "name": "BaseBdev1", 00:18:33.638 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:33.638 "is_configured": true, 00:18:33.638 "data_offset": 2048, 00:18:33.638 "data_size": 63488 00:18:33.638 }, 00:18:33.638 { 00:18:33.638 "name": "BaseBdev2", 00:18:33.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.638 "is_configured": false, 00:18:33.638 "data_offset": 0, 00:18:33.638 "data_size": 0 00:18:33.638 }, 00:18:33.638 { 00:18:33.638 "name": "BaseBdev3", 00:18:33.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.638 "is_configured": false, 00:18:33.638 "data_offset": 0, 00:18:33.638 "data_size": 0 00:18:33.638 } 00:18:33.638 ] 00:18:33.638 }' 00:18:33.638 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.638 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.204 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:34.462 [2024-07-13 23:04:23.737848] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.462 [2024-07-13 23:04:23.737935] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:34.462 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:34.719 [2024-07-13 23:04:23.961993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.719 [2024-07-13 23:04:23.964369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.719 [2024-07-13 23:04:23.964445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.719 [2024-07-13 23:04:23.964473] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:34.719 [2024-07-13 23:04:23.964499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:34.719 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:34.719 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:34.719 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:34.719 23:04:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.719 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.719 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.719 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.720 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.978 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.978 "name": "Existed_Raid", 00:18:34.978 "uuid": "11d4336c-06b8-461a-99dd-c31b19032ff4", 00:18:34.978 "strip_size_kb": 64, 00:18:34.978 "state": "configuring", 00:18:34.978 "raid_level": "concat", 00:18:34.978 "superblock": true, 00:18:34.978 "num_base_bdevs": 3, 00:18:34.978 "num_base_bdevs_discovered": 1, 00:18:34.978 "num_base_bdevs_operational": 3, 00:18:34.978 "base_bdevs_list": [ 00:18:34.978 { 00:18:34.978 "name": "BaseBdev1", 00:18:34.978 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:34.978 "is_configured": true, 00:18:34.978 "data_offset": 2048, 00:18:34.978 "data_size": 63488 00:18:34.978 }, 00:18:34.978 { 00:18:34.978 "name": "BaseBdev2", 00:18:34.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.978 "is_configured": false, 00:18:34.978 "data_offset": 0, 00:18:34.978 "data_size": 0 00:18:34.978 }, 00:18:34.978 { 00:18:34.978 "name": "BaseBdev3", 00:18:34.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.978 "is_configured": false, 00:18:34.978 "data_offset": 0, 00:18:34.978 "data_size": 0 00:18:34.978 } 00:18:34.978 ] 00:18:34.978 }' 00:18:34.978 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.978 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.544 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:35.803 [2024-07-13 23:04:25.093108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.803 BaseBdev2 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- 
# local i 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:35.803 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.062 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:36.321 [ 00:18:36.321 { 00:18:36.321 "name": "BaseBdev2", 00:18:36.321 "aliases": [ 00:18:36.321 "009c7f04-86ad-459b-a871-2b7d4618f31c" 00:18:36.321 ], 00:18:36.321 "product_name": "Malloc disk", 00:18:36.321 "block_size": 512, 00:18:36.321 "num_blocks": 65536, 00:18:36.321 "uuid": "009c7f04-86ad-459b-a871-2b7d4618f31c", 00:18:36.321 "assigned_rate_limits": { 00:18:36.321 "rw_ios_per_sec": 0, 00:18:36.321 "rw_mbytes_per_sec": 0, 00:18:36.321 "r_mbytes_per_sec": 0, 00:18:36.321 "w_mbytes_per_sec": 0 00:18:36.321 }, 00:18:36.321 "claimed": true, 00:18:36.321 "claim_type": "exclusive_write", 00:18:36.321 "zoned": false, 00:18:36.321 "supported_io_types": { 00:18:36.321 "read": true, 00:18:36.321 "write": true, 00:18:36.321 "unmap": true, 00:18:36.321 "flush": true, 00:18:36.321 "reset": true, 00:18:36.321 "nvme_admin": false, 00:18:36.321 "nvme_io": false, 00:18:36.321 "nvme_io_md": false, 00:18:36.321 "write_zeroes": true, 00:18:36.321 "zcopy": true, 00:18:36.321 "get_zone_info": false, 00:18:36.321 "zone_management": false, 00:18:36.321 "zone_append": false, 00:18:36.321 "compare": false, 00:18:36.321 "compare_and_write": false, 00:18:36.321 "abort": true, 00:18:36.321 "seek_hole": false, 00:18:36.321 "seek_data": false, 00:18:36.321 "copy": true, 00:18:36.321 "nvme_iov_md": false 00:18:36.321 }, 00:18:36.321 "memory_domains": [ 00:18:36.321 { 00:18:36.321 "dma_device_id": "system", 00:18:36.321 "dma_device_type": 1 00:18:36.321 }, 00:18:36.321 { 00:18:36.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.321 "dma_device_type": 2 00:18:36.321 } 00:18:36.321 ], 00:18:36.321 "driver_specific": {} 00:18:36.321 } 00:18:36.321 ] 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.321 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.591 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.591 "name": "Existed_Raid", 00:18:36.591 "uuid": "11d4336c-06b8-461a-99dd-c31b19032ff4", 00:18:36.591 "strip_size_kb": 64, 00:18:36.591 "state": "configuring", 00:18:36.591 "raid_level": "concat", 00:18:36.591 "superblock": true, 00:18:36.591 "num_base_bdevs": 3, 00:18:36.591 "num_base_bdevs_discovered": 2, 00:18:36.591 "num_base_bdevs_operational": 3, 00:18:36.591 "base_bdevs_list": [ 00:18:36.591 { 00:18:36.591 "name": "BaseBdev1", 00:18:36.591 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:36.591 "is_configured": true, 00:18:36.591 "data_offset": 2048, 00:18:36.591 "data_size": 63488 00:18:36.591 }, 00:18:36.591 { 00:18:36.591 "name": "BaseBdev2", 00:18:36.591 "uuid": "009c7f04-86ad-459b-a871-2b7d4618f31c", 00:18:36.591 "is_configured": true, 00:18:36.591 "data_offset": 2048, 00:18:36.591 "data_size": 63488 00:18:36.591 }, 00:18:36.591 { 00:18:36.591 "name": "BaseBdev3", 00:18:36.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.591 "is_configured": false, 00:18:36.591 "data_offset": 0, 00:18:36.591 "data_size": 0 00:18:36.591 } 00:18:36.591 ] 00:18:36.591 }' 00:18:36.591 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.591 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.157 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:37.415 [2024-07-13 23:04:26.694643] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.415 [2024-07-13 23:04:26.694895] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:37.415 [2024-07-13 23:04:26.694910] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:37.415 [2024-07-13 23:04:26.695103] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:37.415 [2024-07-13 23:04:26.695547] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:37.416 [2024-07-13 23:04:26.695586] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:37.416 BaseBdev3 00:18:37.416 [2024-07-13 23:04:26.695753] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:37.416 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.674 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:37.932 [ 00:18:37.932 { 00:18:37.932 "name": "BaseBdev3", 00:18:37.932 "aliases": [ 00:18:37.932 "53095b5c-a020-4aa1-8881-a94c6ec5d2ce" 00:18:37.932 ], 00:18:37.932 "product_name": "Malloc disk", 00:18:37.932 "block_size": 512, 00:18:37.932 "num_blocks": 65536, 00:18:37.932 "uuid": "53095b5c-a020-4aa1-8881-a94c6ec5d2ce", 00:18:37.932 "assigned_rate_limits": { 00:18:37.932 "rw_ios_per_sec": 0, 00:18:37.932 "rw_mbytes_per_sec": 0, 00:18:37.932 "r_mbytes_per_sec": 0, 00:18:37.932 "w_mbytes_per_sec": 0 00:18:37.932 }, 00:18:37.932 "claimed": true, 00:18:37.932 "claim_type": "exclusive_write", 00:18:37.932 "zoned": false, 00:18:37.932 "supported_io_types": { 00:18:37.932 "read": true, 00:18:37.932 "write": true, 00:18:37.932 "unmap": true, 00:18:37.932 "flush": true, 00:18:37.932 "reset": true, 00:18:37.932 "nvme_admin": false, 00:18:37.932 "nvme_io": false, 00:18:37.932 "nvme_io_md": false, 00:18:37.932 "write_zeroes": true, 00:18:37.932 "zcopy": true, 00:18:37.932 "get_zone_info": false, 00:18:37.932 "zone_management": false, 00:18:37.932 "zone_append": false, 00:18:37.932 "compare": false, 00:18:37.932 "compare_and_write": false, 00:18:37.932 "abort": true, 00:18:37.932 "seek_hole": false, 00:18:37.932 "seek_data": false, 00:18:37.932 "copy": true, 00:18:37.932 "nvme_iov_md": false 00:18:37.932 }, 00:18:37.932 "memory_domains": [ 00:18:37.932 { 00:18:37.932 "dma_device_id": "system", 00:18:37.932 "dma_device_type": 1 00:18:37.932 }, 00:18:37.932 { 00:18:37.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.932 "dma_device_type": 2 00:18:37.932 } 00:18:37.932 ], 00:18:37.932 "driver_specific": {} 00:18:37.932 } 00:18:37.932 ] 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.932 23:04:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.932 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.933 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.191 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.191 "name": "Existed_Raid", 00:18:38.191 "uuid": "11d4336c-06b8-461a-99dd-c31b19032ff4", 00:18:38.191 "strip_size_kb": 64, 00:18:38.191 "state": "online", 00:18:38.191 "raid_level": "concat", 00:18:38.191 "superblock": true, 00:18:38.191 "num_base_bdevs": 3, 00:18:38.191 "num_base_bdevs_discovered": 3, 00:18:38.191 "num_base_bdevs_operational": 3, 00:18:38.191 "base_bdevs_list": [ 00:18:38.191 { 00:18:38.191 "name": "BaseBdev1", 00:18:38.191 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:38.191 "is_configured": true, 00:18:38.191 "data_offset": 2048, 00:18:38.191 "data_size": 63488 00:18:38.191 }, 00:18:38.191 { 00:18:38.191 "name": "BaseBdev2", 00:18:38.191 "uuid": "009c7f04-86ad-459b-a871-2b7d4618f31c", 00:18:38.191 "is_configured": true, 00:18:38.191 "data_offset": 2048, 00:18:38.191 "data_size": 63488 00:18:38.191 }, 00:18:38.191 { 00:18:38.191 "name": "BaseBdev3", 00:18:38.191 "uuid": "53095b5c-a020-4aa1-8881-a94c6ec5d2ce", 00:18:38.191 "is_configured": true, 00:18:38.191 "data_offset": 2048, 00:18:38.191 "data_size": 63488 00:18:38.191 } 00:18:38.191 ] 00:18:38.191 }' 00:18:38.191 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.191 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:38.759 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:39.018 [2024-07-13 23:04:28.303384] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.018 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:39.018 "name": "Existed_Raid", 00:18:39.018 "aliases": [ 00:18:39.018 "11d4336c-06b8-461a-99dd-c31b19032ff4" 00:18:39.018 ], 00:18:39.018 "product_name": "Raid Volume", 00:18:39.018 "block_size": 512, 00:18:39.018 "num_blocks": 190464, 00:18:39.018 "uuid": 
"11d4336c-06b8-461a-99dd-c31b19032ff4", 00:18:39.018 "assigned_rate_limits": { 00:18:39.018 "rw_ios_per_sec": 0, 00:18:39.018 "rw_mbytes_per_sec": 0, 00:18:39.018 "r_mbytes_per_sec": 0, 00:18:39.018 "w_mbytes_per_sec": 0 00:18:39.018 }, 00:18:39.018 "claimed": false, 00:18:39.018 "zoned": false, 00:18:39.018 "supported_io_types": { 00:18:39.018 "read": true, 00:18:39.018 "write": true, 00:18:39.018 "unmap": true, 00:18:39.018 "flush": true, 00:18:39.018 "reset": true, 00:18:39.018 "nvme_admin": false, 00:18:39.018 "nvme_io": false, 00:18:39.018 "nvme_io_md": false, 00:18:39.018 "write_zeroes": true, 00:18:39.018 "zcopy": false, 00:18:39.018 "get_zone_info": false, 00:18:39.018 "zone_management": false, 00:18:39.018 "zone_append": false, 00:18:39.018 "compare": false, 00:18:39.018 "compare_and_write": false, 00:18:39.018 "abort": false, 00:18:39.018 "seek_hole": false, 00:18:39.018 "seek_data": false, 00:18:39.018 "copy": false, 00:18:39.018 "nvme_iov_md": false 00:18:39.018 }, 00:18:39.018 "memory_domains": [ 00:18:39.018 { 00:18:39.018 "dma_device_id": "system", 00:18:39.018 "dma_device_type": 1 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.018 "dma_device_type": 2 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "dma_device_id": "system", 00:18:39.018 "dma_device_type": 1 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.018 "dma_device_type": 2 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "dma_device_id": "system", 00:18:39.018 "dma_device_type": 1 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.018 "dma_device_type": 2 00:18:39.018 } 00:18:39.018 ], 00:18:39.018 "driver_specific": { 00:18:39.018 "raid": { 00:18:39.018 "uuid": "11d4336c-06b8-461a-99dd-c31b19032ff4", 00:18:39.018 "strip_size_kb": 64, 00:18:39.018 "state": "online", 00:18:39.018 "raid_level": "concat", 00:18:39.018 "superblock": true, 00:18:39.018 "num_base_bdevs": 3, 00:18:39.018 "num_base_bdevs_discovered": 3, 00:18:39.018 "num_base_bdevs_operational": 3, 00:18:39.018 "base_bdevs_list": [ 00:18:39.018 { 00:18:39.018 "name": "BaseBdev1", 00:18:39.018 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:39.018 "is_configured": true, 00:18:39.018 "data_offset": 2048, 00:18:39.018 "data_size": 63488 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "name": "BaseBdev2", 00:18:39.018 "uuid": "009c7f04-86ad-459b-a871-2b7d4618f31c", 00:18:39.018 "is_configured": true, 00:18:39.018 "data_offset": 2048, 00:18:39.018 "data_size": 63488 00:18:39.018 }, 00:18:39.018 { 00:18:39.018 "name": "BaseBdev3", 00:18:39.018 "uuid": "53095b5c-a020-4aa1-8881-a94c6ec5d2ce", 00:18:39.018 "is_configured": true, 00:18:39.018 "data_offset": 2048, 00:18:39.018 "data_size": 63488 00:18:39.018 } 00:18:39.018 ] 00:18:39.018 } 00:18:39.018 } 00:18:39.018 }' 00:18:39.018 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.018 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:39.018 BaseBdev2 00:18:39.018 BaseBdev3' 00:18:39.018 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:39.018 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:39.018 23:04:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:39.277 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:39.277 "name": "BaseBdev1", 00:18:39.277 "aliases": [ 00:18:39.277 "566dce97-d54d-4f40-b0c5-64686606776f" 00:18:39.277 ], 00:18:39.277 "product_name": "Malloc disk", 00:18:39.277 "block_size": 512, 00:18:39.277 "num_blocks": 65536, 00:18:39.277 "uuid": "566dce97-d54d-4f40-b0c5-64686606776f", 00:18:39.277 "assigned_rate_limits": { 00:18:39.277 "rw_ios_per_sec": 0, 00:18:39.277 "rw_mbytes_per_sec": 0, 00:18:39.277 "r_mbytes_per_sec": 0, 00:18:39.277 "w_mbytes_per_sec": 0 00:18:39.277 }, 00:18:39.277 "claimed": true, 00:18:39.277 "claim_type": "exclusive_write", 00:18:39.277 "zoned": false, 00:18:39.277 "supported_io_types": { 00:18:39.277 "read": true, 00:18:39.277 "write": true, 00:18:39.277 "unmap": true, 00:18:39.277 "flush": true, 00:18:39.277 "reset": true, 00:18:39.277 "nvme_admin": false, 00:18:39.277 "nvme_io": false, 00:18:39.277 "nvme_io_md": false, 00:18:39.277 "write_zeroes": true, 00:18:39.277 "zcopy": true, 00:18:39.277 "get_zone_info": false, 00:18:39.277 "zone_management": false, 00:18:39.277 "zone_append": false, 00:18:39.277 "compare": false, 00:18:39.277 "compare_and_write": false, 00:18:39.277 "abort": true, 00:18:39.277 "seek_hole": false, 00:18:39.277 "seek_data": false, 00:18:39.277 "copy": true, 00:18:39.277 "nvme_iov_md": false 00:18:39.277 }, 00:18:39.277 "memory_domains": [ 00:18:39.277 { 00:18:39.277 "dma_device_id": "system", 00:18:39.277 "dma_device_type": 1 00:18:39.277 }, 00:18:39.277 { 00:18:39.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.277 "dma_device_type": 2 00:18:39.277 } 00:18:39.277 ], 00:18:39.277 "driver_specific": {} 00:18:39.277 }' 00:18:39.277 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.536 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.794 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.794 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.794 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.794 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:39.794 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:39.794 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:39.794 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.051 23:04:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.051 "name": "BaseBdev2", 00:18:40.051 "aliases": [ 00:18:40.051 "009c7f04-86ad-459b-a871-2b7d4618f31c" 00:18:40.051 ], 00:18:40.051 "product_name": "Malloc disk", 00:18:40.051 "block_size": 512, 00:18:40.051 "num_blocks": 65536, 00:18:40.051 "uuid": "009c7f04-86ad-459b-a871-2b7d4618f31c", 00:18:40.051 "assigned_rate_limits": { 00:18:40.051 "rw_ios_per_sec": 0, 00:18:40.051 "rw_mbytes_per_sec": 0, 00:18:40.051 "r_mbytes_per_sec": 0, 00:18:40.052 "w_mbytes_per_sec": 0 00:18:40.052 }, 00:18:40.052 "claimed": true, 00:18:40.052 "claim_type": "exclusive_write", 00:18:40.052 "zoned": false, 00:18:40.052 "supported_io_types": { 00:18:40.052 "read": true, 00:18:40.052 "write": true, 00:18:40.052 "unmap": true, 00:18:40.052 "flush": true, 00:18:40.052 "reset": true, 00:18:40.052 "nvme_admin": false, 00:18:40.052 "nvme_io": false, 00:18:40.052 "nvme_io_md": false, 00:18:40.052 "write_zeroes": true, 00:18:40.052 "zcopy": true, 00:18:40.052 "get_zone_info": false, 00:18:40.052 "zone_management": false, 00:18:40.052 "zone_append": false, 00:18:40.052 "compare": false, 00:18:40.052 "compare_and_write": false, 00:18:40.052 "abort": true, 00:18:40.052 "seek_hole": false, 00:18:40.052 "seek_data": false, 00:18:40.052 "copy": true, 00:18:40.052 "nvme_iov_md": false 00:18:40.052 }, 00:18:40.052 "memory_domains": [ 00:18:40.052 { 00:18:40.052 "dma_device_id": "system", 00:18:40.052 "dma_device_type": 1 00:18:40.052 }, 00:18:40.052 { 00:18:40.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.052 "dma_device_type": 2 00:18:40.052 } 00:18:40.052 ], 00:18:40.052 "driver_specific": {} 00:18:40.052 }' 00:18:40.052 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.052 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.052 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:40.052 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.052 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:40.310 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.569 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.569 "name": "BaseBdev3", 00:18:40.569 "aliases": [ 00:18:40.569 
"53095b5c-a020-4aa1-8881-a94c6ec5d2ce" 00:18:40.569 ], 00:18:40.569 "product_name": "Malloc disk", 00:18:40.569 "block_size": 512, 00:18:40.569 "num_blocks": 65536, 00:18:40.569 "uuid": "53095b5c-a020-4aa1-8881-a94c6ec5d2ce", 00:18:40.569 "assigned_rate_limits": { 00:18:40.569 "rw_ios_per_sec": 0, 00:18:40.569 "rw_mbytes_per_sec": 0, 00:18:40.569 "r_mbytes_per_sec": 0, 00:18:40.569 "w_mbytes_per_sec": 0 00:18:40.569 }, 00:18:40.569 "claimed": true, 00:18:40.569 "claim_type": "exclusive_write", 00:18:40.569 "zoned": false, 00:18:40.569 "supported_io_types": { 00:18:40.569 "read": true, 00:18:40.569 "write": true, 00:18:40.569 "unmap": true, 00:18:40.569 "flush": true, 00:18:40.569 "reset": true, 00:18:40.569 "nvme_admin": false, 00:18:40.569 "nvme_io": false, 00:18:40.569 "nvme_io_md": false, 00:18:40.569 "write_zeroes": true, 00:18:40.569 "zcopy": true, 00:18:40.569 "get_zone_info": false, 00:18:40.569 "zone_management": false, 00:18:40.569 "zone_append": false, 00:18:40.569 "compare": false, 00:18:40.569 "compare_and_write": false, 00:18:40.569 "abort": true, 00:18:40.569 "seek_hole": false, 00:18:40.569 "seek_data": false, 00:18:40.569 "copy": true, 00:18:40.569 "nvme_iov_md": false 00:18:40.569 }, 00:18:40.569 "memory_domains": [ 00:18:40.569 { 00:18:40.569 "dma_device_id": "system", 00:18:40.569 "dma_device_type": 1 00:18:40.569 }, 00:18:40.569 { 00:18:40.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.569 "dma_device_type": 2 00:18:40.569 } 00:18:40.569 ], 00:18:40.569 "driver_specific": {} 00:18:40.569 }' 00:18:40.569 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.827 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.827 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:40.827 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.827 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.827 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:40.827 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.827 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:41.085 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:41.085 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:41.085 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:41.085 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:41.085 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:41.355 [2024-07-13 23:04:30.639842] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.355 [2024-07-13 23:04:30.639912] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.355 [2024-07-13 23:04:30.640028] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy concat 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.355 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.628 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.628 "name": "Existed_Raid", 00:18:41.628 "uuid": "11d4336c-06b8-461a-99dd-c31b19032ff4", 00:18:41.628 "strip_size_kb": 64, 00:18:41.628 "state": "offline", 00:18:41.628 "raid_level": "concat", 00:18:41.628 "superblock": true, 00:18:41.628 "num_base_bdevs": 3, 00:18:41.628 "num_base_bdevs_discovered": 2, 00:18:41.628 "num_base_bdevs_operational": 2, 00:18:41.628 "base_bdevs_list": [ 00:18:41.628 { 00:18:41.628 "name": null, 00:18:41.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.628 "is_configured": false, 00:18:41.628 "data_offset": 2048, 00:18:41.628 "data_size": 63488 00:18:41.628 }, 00:18:41.628 { 00:18:41.628 "name": "BaseBdev2", 00:18:41.628 "uuid": "009c7f04-86ad-459b-a871-2b7d4618f31c", 00:18:41.628 "is_configured": true, 00:18:41.628 "data_offset": 2048, 00:18:41.628 "data_size": 63488 00:18:41.628 }, 00:18:41.628 { 00:18:41.628 "name": "BaseBdev3", 00:18:41.628 "uuid": "53095b5c-a020-4aa1-8881-a94c6ec5d2ce", 00:18:41.628 "is_configured": true, 00:18:41.628 "data_offset": 2048, 00:18:41.628 "data_size": 63488 00:18:41.628 } 00:18:41.628 ] 00:18:41.628 }' 00:18:41.628 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.628 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.193 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:42.193 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:42.193 23:04:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.193 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:42.451 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:42.451 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.709 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:42.709 [2024-07-13 23:04:32.105272] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.967 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:42.967 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:42.967 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:42.967 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:43.225 [2024-07-13 23:04:32.593546] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.225 [2024-07-13 23:04:32.593626] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.225 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.483 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:43.483 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:43.483 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:43.483 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:43.483 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:43.483 23:04:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:43.741 BaseBdev2 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:43.741 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.998 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.255 [ 00:18:44.255 { 00:18:44.255 "name": "BaseBdev2", 00:18:44.255 "aliases": [ 00:18:44.255 "a627eca4-f170-47fb-8b8a-f58b793b4e38" 00:18:44.255 ], 00:18:44.255 "product_name": "Malloc disk", 00:18:44.255 "block_size": 512, 00:18:44.255 "num_blocks": 65536, 00:18:44.255 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:44.255 "assigned_rate_limits": { 00:18:44.255 "rw_ios_per_sec": 0, 00:18:44.255 "rw_mbytes_per_sec": 0, 00:18:44.255 "r_mbytes_per_sec": 0, 00:18:44.255 "w_mbytes_per_sec": 0 00:18:44.255 }, 00:18:44.255 "claimed": false, 00:18:44.255 "zoned": false, 00:18:44.255 "supported_io_types": { 00:18:44.255 "read": true, 00:18:44.255 "write": true, 00:18:44.255 "unmap": true, 00:18:44.255 "flush": true, 00:18:44.255 "reset": true, 00:18:44.255 "nvme_admin": false, 00:18:44.255 "nvme_io": false, 00:18:44.255 "nvme_io_md": false, 00:18:44.255 "write_zeroes": true, 00:18:44.255 "zcopy": true, 00:18:44.255 "get_zone_info": false, 00:18:44.255 "zone_management": false, 00:18:44.255 "zone_append": false, 00:18:44.255 "compare": false, 00:18:44.255 "compare_and_write": false, 00:18:44.255 "abort": true, 00:18:44.255 "seek_hole": false, 00:18:44.255 "seek_data": false, 00:18:44.255 "copy": true, 00:18:44.255 "nvme_iov_md": false 00:18:44.255 }, 00:18:44.255 "memory_domains": [ 00:18:44.255 { 00:18:44.255 "dma_device_id": "system", 00:18:44.255 "dma_device_type": 1 00:18:44.255 }, 00:18:44.255 { 00:18:44.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.255 "dma_device_type": 2 00:18:44.255 } 00:18:44.255 ], 00:18:44.255 "driver_specific": {} 00:18:44.255 } 00:18:44.255 ] 00:18:44.255 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:44.255 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:44.255 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:44.255 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:44.513 BaseBdev3 00:18:44.513 23:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:44.513 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:44.513 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:44.513 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:44.513 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:44.513 23:04:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:44.513 23:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.770 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:45.027 [ 00:18:45.028 { 00:18:45.028 "name": "BaseBdev3", 00:18:45.028 "aliases": [ 00:18:45.028 "d7d75055-7186-4861-9206-952ff877be90" 00:18:45.028 ], 00:18:45.028 "product_name": "Malloc disk", 00:18:45.028 "block_size": 512, 00:18:45.028 "num_blocks": 65536, 00:18:45.028 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:45.028 "assigned_rate_limits": { 00:18:45.028 "rw_ios_per_sec": 0, 00:18:45.028 "rw_mbytes_per_sec": 0, 00:18:45.028 "r_mbytes_per_sec": 0, 00:18:45.028 "w_mbytes_per_sec": 0 00:18:45.028 }, 00:18:45.028 "claimed": false, 00:18:45.028 "zoned": false, 00:18:45.028 "supported_io_types": { 00:18:45.028 "read": true, 00:18:45.028 "write": true, 00:18:45.028 "unmap": true, 00:18:45.028 "flush": true, 00:18:45.028 "reset": true, 00:18:45.028 "nvme_admin": false, 00:18:45.028 "nvme_io": false, 00:18:45.028 "nvme_io_md": false, 00:18:45.028 "write_zeroes": true, 00:18:45.028 "zcopy": true, 00:18:45.028 "get_zone_info": false, 00:18:45.028 "zone_management": false, 00:18:45.028 "zone_append": false, 00:18:45.028 "compare": false, 00:18:45.028 "compare_and_write": false, 00:18:45.028 "abort": true, 00:18:45.028 "seek_hole": false, 00:18:45.028 "seek_data": false, 00:18:45.028 "copy": true, 00:18:45.028 "nvme_iov_md": false 00:18:45.028 }, 00:18:45.028 "memory_domains": [ 00:18:45.028 { 00:18:45.028 "dma_device_id": "system", 00:18:45.028 "dma_device_type": 1 00:18:45.028 }, 00:18:45.028 { 00:18:45.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.028 "dma_device_type": 2 00:18:45.028 } 00:18:45.028 ], 00:18:45.028 "driver_specific": {} 00:18:45.028 } 00:18:45.028 ] 00:18:45.028 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:45.028 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:45.028 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:45.028 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:45.286 [2024-07-13 23:04:34.469343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:45.286 [2024-07-13 23:04:34.469453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:45.286 [2024-07-13 23:04:34.469505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.286 [2024-07-13 23:04:34.471532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.286 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.544 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.544 "name": "Existed_Raid", 00:18:45.544 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:45.544 "strip_size_kb": 64, 00:18:45.544 "state": "configuring", 00:18:45.544 "raid_level": "concat", 00:18:45.544 "superblock": true, 00:18:45.544 "num_base_bdevs": 3, 00:18:45.544 "num_base_bdevs_discovered": 2, 00:18:45.544 "num_base_bdevs_operational": 3, 00:18:45.544 "base_bdevs_list": [ 00:18:45.544 { 00:18:45.544 "name": "BaseBdev1", 00:18:45.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.544 "is_configured": false, 00:18:45.544 "data_offset": 0, 00:18:45.544 "data_size": 0 00:18:45.544 }, 00:18:45.544 { 00:18:45.544 "name": "BaseBdev2", 00:18:45.544 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:45.544 "is_configured": true, 00:18:45.544 "data_offset": 2048, 00:18:45.544 "data_size": 63488 00:18:45.544 }, 00:18:45.544 { 00:18:45.544 "name": "BaseBdev3", 00:18:45.544 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:45.544 "is_configured": true, 00:18:45.544 "data_offset": 2048, 00:18:45.544 "data_size": 63488 00:18:45.544 } 00:18:45.544 ] 00:18:45.544 }' 00:18:45.544 23:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.544 23:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.110 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:46.369 [2024-07-13 23:04:35.584542] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.369 23:04:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.369 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.628 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.628 "name": "Existed_Raid", 00:18:46.628 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:46.628 "strip_size_kb": 64, 00:18:46.628 "state": "configuring", 00:18:46.628 "raid_level": "concat", 00:18:46.628 "superblock": true, 00:18:46.628 "num_base_bdevs": 3, 00:18:46.628 "num_base_bdevs_discovered": 1, 00:18:46.628 "num_base_bdevs_operational": 3, 00:18:46.628 "base_bdevs_list": [ 00:18:46.628 { 00:18:46.628 "name": "BaseBdev1", 00:18:46.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.628 "is_configured": false, 00:18:46.628 "data_offset": 0, 00:18:46.628 "data_size": 0 00:18:46.628 }, 00:18:46.628 { 00:18:46.628 "name": null, 00:18:46.628 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:46.628 "is_configured": false, 00:18:46.628 "data_offset": 2048, 00:18:46.628 "data_size": 63488 00:18:46.628 }, 00:18:46.628 { 00:18:46.628 "name": "BaseBdev3", 00:18:46.628 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:46.628 "is_configured": true, 00:18:46.628 "data_offset": 2048, 00:18:46.628 "data_size": 63488 00:18:46.628 } 00:18:46.628 ] 00:18:46.628 }' 00:18:46.628 23:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.628 23:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.196 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.196 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:47.454 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:47.454 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.713 [2024-07-13 23:04:36.921754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.713 BaseBdev1 00:18:47.713 23:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:47.713 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:47.713 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:47.713 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:47.713 23:04:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:47.713 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:47.713 23:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.971 23:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:48.229 [ 00:18:48.229 { 00:18:48.229 "name": "BaseBdev1", 00:18:48.229 "aliases": [ 00:18:48.229 "38cdc4f5-cc5b-45cd-a755-711219de22b1" 00:18:48.229 ], 00:18:48.229 "product_name": "Malloc disk", 00:18:48.229 "block_size": 512, 00:18:48.229 "num_blocks": 65536, 00:18:48.229 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:48.229 "assigned_rate_limits": { 00:18:48.229 "rw_ios_per_sec": 0, 00:18:48.229 "rw_mbytes_per_sec": 0, 00:18:48.229 "r_mbytes_per_sec": 0, 00:18:48.229 "w_mbytes_per_sec": 0 00:18:48.229 }, 00:18:48.229 "claimed": true, 00:18:48.229 "claim_type": "exclusive_write", 00:18:48.229 "zoned": false, 00:18:48.229 "supported_io_types": { 00:18:48.229 "read": true, 00:18:48.229 "write": true, 00:18:48.229 "unmap": true, 00:18:48.229 "flush": true, 00:18:48.229 "reset": true, 00:18:48.229 "nvme_admin": false, 00:18:48.229 "nvme_io": false, 00:18:48.229 "nvme_io_md": false, 00:18:48.229 "write_zeroes": true, 00:18:48.229 "zcopy": true, 00:18:48.229 "get_zone_info": false, 00:18:48.229 "zone_management": false, 00:18:48.229 "zone_append": false, 00:18:48.229 "compare": false, 00:18:48.229 "compare_and_write": false, 00:18:48.229 "abort": true, 00:18:48.229 "seek_hole": false, 00:18:48.229 "seek_data": false, 00:18:48.229 "copy": true, 00:18:48.229 "nvme_iov_md": false 00:18:48.229 }, 00:18:48.229 "memory_domains": [ 00:18:48.229 { 00:18:48.229 "dma_device_id": "system", 00:18:48.229 "dma_device_type": 1 00:18:48.229 }, 00:18:48.229 { 00:18:48.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.229 "dma_device_type": 2 00:18:48.229 } 00:18:48.229 ], 00:18:48.229 "driver_specific": {} 00:18:48.229 } 00:18:48.229 ] 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.229 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.229 "name": "Existed_Raid", 00:18:48.230 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:48.230 "strip_size_kb": 64, 00:18:48.230 "state": "configuring", 00:18:48.230 "raid_level": "concat", 00:18:48.230 "superblock": true, 00:18:48.230 "num_base_bdevs": 3, 00:18:48.230 "num_base_bdevs_discovered": 2, 00:18:48.230 "num_base_bdevs_operational": 3, 00:18:48.230 "base_bdevs_list": [ 00:18:48.230 { 00:18:48.230 "name": "BaseBdev1", 00:18:48.230 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:48.230 "is_configured": true, 00:18:48.230 "data_offset": 2048, 00:18:48.230 "data_size": 63488 00:18:48.230 }, 00:18:48.230 { 00:18:48.230 "name": null, 00:18:48.230 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:48.230 "is_configured": false, 00:18:48.230 "data_offset": 2048, 00:18:48.230 "data_size": 63488 00:18:48.230 }, 00:18:48.230 { 00:18:48.230 "name": "BaseBdev3", 00:18:48.230 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:48.230 "is_configured": true, 00:18:48.230 "data_offset": 2048, 00:18:48.230 "data_size": 63488 00:18:48.230 } 00:18:48.230 ] 00:18:48.230 }' 00:18:48.230 23:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.230 23:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.164 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.164 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:49.164 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:49.422 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:49.423 [2024-07-13 23:04:38.830335] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.681 23:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.681 23:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.681 "name": "Existed_Raid", 00:18:49.681 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:49.681 "strip_size_kb": 64, 00:18:49.681 "state": "configuring", 00:18:49.681 "raid_level": "concat", 00:18:49.681 "superblock": true, 00:18:49.681 "num_base_bdevs": 3, 00:18:49.681 "num_base_bdevs_discovered": 1, 00:18:49.681 "num_base_bdevs_operational": 3, 00:18:49.681 "base_bdevs_list": [ 00:18:49.681 { 00:18:49.681 "name": "BaseBdev1", 00:18:49.681 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:49.681 "is_configured": true, 00:18:49.681 "data_offset": 2048, 00:18:49.681 "data_size": 63488 00:18:49.681 }, 00:18:49.681 { 00:18:49.681 "name": null, 00:18:49.681 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:49.681 "is_configured": false, 00:18:49.681 "data_offset": 2048, 00:18:49.681 "data_size": 63488 00:18:49.681 }, 00:18:49.681 { 00:18:49.681 "name": null, 00:18:49.681 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:49.681 "is_configured": false, 00:18:49.681 "data_offset": 2048, 00:18:49.681 "data_size": 63488 00:18:49.681 } 00:18:49.681 ] 00:18:49.681 }' 00:18:49.681 23:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.681 23:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.616 23:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.616 23:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:50.616 23:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:50.616 23:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:50.887 [2024-07-13 23:04:40.170648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.887 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.157 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.157 "name": "Existed_Raid", 00:18:51.157 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:51.157 "strip_size_kb": 64, 00:18:51.157 "state": "configuring", 00:18:51.157 "raid_level": "concat", 00:18:51.157 "superblock": true, 00:18:51.157 "num_base_bdevs": 3, 00:18:51.157 "num_base_bdevs_discovered": 2, 00:18:51.157 "num_base_bdevs_operational": 3, 00:18:51.157 "base_bdevs_list": [ 00:18:51.157 { 00:18:51.157 "name": "BaseBdev1", 00:18:51.157 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:51.157 "is_configured": true, 00:18:51.157 "data_offset": 2048, 00:18:51.157 "data_size": 63488 00:18:51.157 }, 00:18:51.157 { 00:18:51.157 "name": null, 00:18:51.157 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:51.157 "is_configured": false, 00:18:51.157 "data_offset": 2048, 00:18:51.157 "data_size": 63488 00:18:51.157 }, 00:18:51.157 { 00:18:51.157 "name": "BaseBdev3", 00:18:51.157 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:51.157 "is_configured": true, 00:18:51.157 "data_offset": 2048, 00:18:51.157 "data_size": 63488 00:18:51.157 } 00:18:51.157 ] 00:18:51.157 }' 00:18:51.157 23:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.157 23:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.723 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.723 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:51.981 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:51.981 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:52.239 [2024-07-13 23:04:41.567031] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.239 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.496 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.496 "name": "Existed_Raid", 00:18:52.496 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:52.496 "strip_size_kb": 64, 00:18:52.496 "state": "configuring", 00:18:52.496 "raid_level": "concat", 00:18:52.496 "superblock": true, 00:18:52.496 "num_base_bdevs": 3, 00:18:52.496 "num_base_bdevs_discovered": 1, 00:18:52.496 "num_base_bdevs_operational": 3, 00:18:52.496 "base_bdevs_list": [ 00:18:52.496 { 00:18:52.496 "name": null, 00:18:52.496 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:52.496 "is_configured": false, 00:18:52.496 "data_offset": 2048, 00:18:52.496 "data_size": 63488 00:18:52.496 }, 00:18:52.496 { 00:18:52.496 "name": null, 00:18:52.496 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:52.496 "is_configured": false, 00:18:52.496 "data_offset": 2048, 00:18:52.496 "data_size": 63488 00:18:52.496 }, 00:18:52.496 { 00:18:52.496 "name": "BaseBdev3", 00:18:52.496 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:52.496 "is_configured": true, 00:18:52.496 "data_offset": 2048, 00:18:52.496 "data_size": 63488 00:18:52.496 } 00:18:52.496 ] 00:18:52.496 }' 00:18:52.496 23:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.496 23:04:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.426 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:53.426 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.426 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:53.426 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:53.683 [2024-07-13 23:04:42.924627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.683 23:04:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.683 23:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.958 23:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.958 "name": "Existed_Raid", 00:18:53.958 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:53.958 "strip_size_kb": 64, 00:18:53.958 "state": "configuring", 00:18:53.958 "raid_level": "concat", 00:18:53.958 "superblock": true, 00:18:53.958 "num_base_bdevs": 3, 00:18:53.958 "num_base_bdevs_discovered": 2, 00:18:53.958 "num_base_bdevs_operational": 3, 00:18:53.958 "base_bdevs_list": [ 00:18:53.958 { 00:18:53.958 "name": null, 00:18:53.958 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:53.958 "is_configured": false, 00:18:53.958 "data_offset": 2048, 00:18:53.958 "data_size": 63488 00:18:53.958 }, 00:18:53.958 { 00:18:53.958 "name": "BaseBdev2", 00:18:53.958 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:53.958 "is_configured": true, 00:18:53.958 "data_offset": 2048, 00:18:53.958 "data_size": 63488 00:18:53.958 }, 00:18:53.958 { 00:18:53.958 "name": "BaseBdev3", 00:18:53.958 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:53.958 "is_configured": true, 00:18:53.958 "data_offset": 2048, 00:18:53.958 "data_size": 63488 00:18:53.958 } 00:18:53.958 ] 00:18:53.958 }' 00:18:53.958 23:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.958 23:04:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.523 23:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.523 23:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:54.781 23:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:54.781 23:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.781 23:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:55.039 23:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 38cdc4f5-cc5b-45cd-a755-711219de22b1 00:18:55.297 [2024-07-13 23:04:44.590008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:55.297 [2024-07-13 23:04:44.590425] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:55.297 
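The recovery step in flight here: the test deleted BaseBdev1 earlier, and because Existed_Raid was created with a superblock (-s), recreating a malloc disk under a new name but with BaseBdev1's original UUID is enough for the raid to re-claim it by UUID and transition back online — the *DEBUG* burst around this point shows the claim, the io device registration, and the blockcnt/blocklen of the assembled volume. A minimal sketch of that step, using the rpc.py path and socket from this run; the trailing jq probe is an illustrative check, not part of the harness:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    uuid=38cdc4f5-cc5b-45cd-a755-711219de22b1     # BaseBdev1's UUID, recorded above
    # recreate the missing base bdev; the superblock identifies it by UUID, not name
    $rpc -s $sock bdev_malloc_create 32 512 -b NewBaseBdev -u $uuid
    $rpc -s $sock bdev_wait_for_examine
    $rpc -s $sock bdev_get_bdevs -b NewBaseBdev -t 2000    # waitforbdev, as in the harness
    # expect "online" once all three base bdevs are claimed again
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'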
[2024-07-13 23:04:44.590577] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:55.297 [2024-07-13 23:04:44.590715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:55.297 NewBaseBdev 00:18:55.297 [2024-07-13 23:04:44.591186] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:55.297 [2024-07-13 23:04:44.591330] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:55.297 [2024-07-13 23:04:44.591557] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:55.297 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.553 23:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:55.811 [ 00:18:55.811 { 00:18:55.811 "name": "NewBaseBdev", 00:18:55.811 "aliases": [ 00:18:55.811 "38cdc4f5-cc5b-45cd-a755-711219de22b1" 00:18:55.811 ], 00:18:55.811 "product_name": "Malloc disk", 00:18:55.811 "block_size": 512, 00:18:55.811 "num_blocks": 65536, 00:18:55.811 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:55.811 "assigned_rate_limits": { 00:18:55.811 "rw_ios_per_sec": 0, 00:18:55.811 "rw_mbytes_per_sec": 0, 00:18:55.811 "r_mbytes_per_sec": 0, 00:18:55.812 "w_mbytes_per_sec": 0 00:18:55.812 }, 00:18:55.812 "claimed": true, 00:18:55.812 "claim_type": "exclusive_write", 00:18:55.812 "zoned": false, 00:18:55.812 "supported_io_types": { 00:18:55.812 "read": true, 00:18:55.812 "write": true, 00:18:55.812 "unmap": true, 00:18:55.812 "flush": true, 00:18:55.812 "reset": true, 00:18:55.812 "nvme_admin": false, 00:18:55.812 "nvme_io": false, 00:18:55.812 "nvme_io_md": false, 00:18:55.812 "write_zeroes": true, 00:18:55.812 "zcopy": true, 00:18:55.812 "get_zone_info": false, 00:18:55.812 "zone_management": false, 00:18:55.812 "zone_append": false, 00:18:55.812 "compare": false, 00:18:55.812 "compare_and_write": false, 00:18:55.812 "abort": true, 00:18:55.812 "seek_hole": false, 00:18:55.812 "seek_data": false, 00:18:55.812 "copy": true, 00:18:55.812 "nvme_iov_md": false 00:18:55.812 }, 00:18:55.812 "memory_domains": [ 00:18:55.812 { 00:18:55.812 "dma_device_id": "system", 00:18:55.812 "dma_device_type": 1 00:18:55.812 }, 00:18:55.812 { 00:18:55.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.812 "dma_device_type": 2 00:18:55.812 } 00:18:55.812 ], 00:18:55.812 "driver_specific": {} 00:18:55.812 } 00:18:55.812 ] 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:55.812 23:04:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.812 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.071 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.071 "name": "Existed_Raid", 00:18:56.071 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:56.071 "strip_size_kb": 64, 00:18:56.071 "state": "online", 00:18:56.071 "raid_level": "concat", 00:18:56.071 "superblock": true, 00:18:56.071 "num_base_bdevs": 3, 00:18:56.071 "num_base_bdevs_discovered": 3, 00:18:56.071 "num_base_bdevs_operational": 3, 00:18:56.071 "base_bdevs_list": [ 00:18:56.071 { 00:18:56.071 "name": "NewBaseBdev", 00:18:56.071 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:56.071 "is_configured": true, 00:18:56.071 "data_offset": 2048, 00:18:56.071 "data_size": 63488 00:18:56.071 }, 00:18:56.071 { 00:18:56.071 "name": "BaseBdev2", 00:18:56.071 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:56.071 "is_configured": true, 00:18:56.071 "data_offset": 2048, 00:18:56.071 "data_size": 63488 00:18:56.071 }, 00:18:56.071 { 00:18:56.071 "name": "BaseBdev3", 00:18:56.071 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:56.071 "is_configured": true, 00:18:56.071 "data_offset": 2048, 00:18:56.071 "data_size": 63488 00:18:56.071 } 00:18:56.071 ] 00:18:56.071 }' 00:18:56.071 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.071 23:04:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local name 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:56.637 23:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:56.896 [2024-07-13 23:04:46.182768] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.896 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:56.896 "name": "Existed_Raid", 00:18:56.896 "aliases": [ 00:18:56.896 "f2027b35-3c71-4688-a39f-1636b7f8a292" 00:18:56.896 ], 00:18:56.896 "product_name": "Raid Volume", 00:18:56.896 "block_size": 512, 00:18:56.896 "num_blocks": 190464, 00:18:56.896 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:56.896 "assigned_rate_limits": { 00:18:56.896 "rw_ios_per_sec": 0, 00:18:56.896 "rw_mbytes_per_sec": 0, 00:18:56.896 "r_mbytes_per_sec": 0, 00:18:56.896 "w_mbytes_per_sec": 0 00:18:56.896 }, 00:18:56.896 "claimed": false, 00:18:56.896 "zoned": false, 00:18:56.896 "supported_io_types": { 00:18:56.896 "read": true, 00:18:56.896 "write": true, 00:18:56.896 "unmap": true, 00:18:56.896 "flush": true, 00:18:56.896 "reset": true, 00:18:56.896 "nvme_admin": false, 00:18:56.896 "nvme_io": false, 00:18:56.896 "nvme_io_md": false, 00:18:56.896 "write_zeroes": true, 00:18:56.896 "zcopy": false, 00:18:56.896 "get_zone_info": false, 00:18:56.896 "zone_management": false, 00:18:56.896 "zone_append": false, 00:18:56.896 "compare": false, 00:18:56.896 "compare_and_write": false, 00:18:56.896 "abort": false, 00:18:56.896 "seek_hole": false, 00:18:56.896 "seek_data": false, 00:18:56.896 "copy": false, 00:18:56.896 "nvme_iov_md": false 00:18:56.896 }, 00:18:56.896 "memory_domains": [ 00:18:56.896 { 00:18:56.896 "dma_device_id": "system", 00:18:56.896 "dma_device_type": 1 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.896 "dma_device_type": 2 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "dma_device_id": "system", 00:18:56.896 "dma_device_type": 1 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.896 "dma_device_type": 2 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "dma_device_id": "system", 00:18:56.896 "dma_device_type": 1 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.896 "dma_device_type": 2 00:18:56.896 } 00:18:56.896 ], 00:18:56.896 "driver_specific": { 00:18:56.896 "raid": { 00:18:56.896 "uuid": "f2027b35-3c71-4688-a39f-1636b7f8a292", 00:18:56.896 "strip_size_kb": 64, 00:18:56.896 "state": "online", 00:18:56.896 "raid_level": "concat", 00:18:56.896 "superblock": true, 00:18:56.896 "num_base_bdevs": 3, 00:18:56.896 "num_base_bdevs_discovered": 3, 00:18:56.896 "num_base_bdevs_operational": 3, 00:18:56.896 "base_bdevs_list": [ 00:18:56.896 { 00:18:56.896 "name": "NewBaseBdev", 00:18:56.896 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:56.896 "is_configured": true, 00:18:56.896 "data_offset": 2048, 00:18:56.896 "data_size": 63488 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "name": "BaseBdev2", 00:18:56.896 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:56.896 "is_configured": true, 00:18:56.896 "data_offset": 2048, 00:18:56.896 "data_size": 63488 00:18:56.896 }, 00:18:56.896 { 00:18:56.896 "name": "BaseBdev3", 00:18:56.896 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:56.896 "is_configured": 
true, 00:18:56.896 "data_offset": 2048, 00:18:56.896 "data_size": 63488 00:18:56.896 } 00:18:56.896 ] 00:18:56.896 } 00:18:56.896 } 00:18:56.896 }' 00:18:56.896 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.896 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:56.896 BaseBdev2 00:18:56.896 BaseBdev3' 00:18:56.896 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:56.896 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:56.896 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.154 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.154 "name": "NewBaseBdev", 00:18:57.154 "aliases": [ 00:18:57.154 "38cdc4f5-cc5b-45cd-a755-711219de22b1" 00:18:57.154 ], 00:18:57.154 "product_name": "Malloc disk", 00:18:57.154 "block_size": 512, 00:18:57.154 "num_blocks": 65536, 00:18:57.154 "uuid": "38cdc4f5-cc5b-45cd-a755-711219de22b1", 00:18:57.154 "assigned_rate_limits": { 00:18:57.154 "rw_ios_per_sec": 0, 00:18:57.154 "rw_mbytes_per_sec": 0, 00:18:57.154 "r_mbytes_per_sec": 0, 00:18:57.154 "w_mbytes_per_sec": 0 00:18:57.154 }, 00:18:57.154 "claimed": true, 00:18:57.154 "claim_type": "exclusive_write", 00:18:57.154 "zoned": false, 00:18:57.154 "supported_io_types": { 00:18:57.154 "read": true, 00:18:57.154 "write": true, 00:18:57.155 "unmap": true, 00:18:57.155 "flush": true, 00:18:57.155 "reset": true, 00:18:57.155 "nvme_admin": false, 00:18:57.155 "nvme_io": false, 00:18:57.155 "nvme_io_md": false, 00:18:57.155 "write_zeroes": true, 00:18:57.155 "zcopy": true, 00:18:57.155 "get_zone_info": false, 00:18:57.155 "zone_management": false, 00:18:57.155 "zone_append": false, 00:18:57.155 "compare": false, 00:18:57.155 "compare_and_write": false, 00:18:57.155 "abort": true, 00:18:57.155 "seek_hole": false, 00:18:57.155 "seek_data": false, 00:18:57.155 "copy": true, 00:18:57.155 "nvme_iov_md": false 00:18:57.155 }, 00:18:57.155 "memory_domains": [ 00:18:57.155 { 00:18:57.155 "dma_device_id": "system", 00:18:57.155 "dma_device_type": 1 00:18:57.155 }, 00:18:57.155 { 00:18:57.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.155 "dma_device_type": 2 00:18:57.155 } 00:18:57.155 ], 00:18:57.155 "driver_specific": {} 00:18:57.155 }' 00:18:57.155 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.412 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.670 23:04:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:57.670 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.670 23:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.670 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.670 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.670 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.670 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:57.927 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.927 "name": "BaseBdev2", 00:18:57.927 "aliases": [ 00:18:57.927 "a627eca4-f170-47fb-8b8a-f58b793b4e38" 00:18:57.927 ], 00:18:57.927 "product_name": "Malloc disk", 00:18:57.927 "block_size": 512, 00:18:57.927 "num_blocks": 65536, 00:18:57.927 "uuid": "a627eca4-f170-47fb-8b8a-f58b793b4e38", 00:18:57.927 "assigned_rate_limits": { 00:18:57.927 "rw_ios_per_sec": 0, 00:18:57.927 "rw_mbytes_per_sec": 0, 00:18:57.927 "r_mbytes_per_sec": 0, 00:18:57.927 "w_mbytes_per_sec": 0 00:18:57.927 }, 00:18:57.927 "claimed": true, 00:18:57.927 "claim_type": "exclusive_write", 00:18:57.927 "zoned": false, 00:18:57.927 "supported_io_types": { 00:18:57.927 "read": true, 00:18:57.927 "write": true, 00:18:57.927 "unmap": true, 00:18:57.927 "flush": true, 00:18:57.927 "reset": true, 00:18:57.927 "nvme_admin": false, 00:18:57.927 "nvme_io": false, 00:18:57.927 "nvme_io_md": false, 00:18:57.927 "write_zeroes": true, 00:18:57.927 "zcopy": true, 00:18:57.927 "get_zone_info": false, 00:18:57.927 "zone_management": false, 00:18:57.927 "zone_append": false, 00:18:57.927 "compare": false, 00:18:57.927 "compare_and_write": false, 00:18:57.927 "abort": true, 00:18:57.927 "seek_hole": false, 00:18:57.927 "seek_data": false, 00:18:57.927 "copy": true, 00:18:57.927 "nvme_iov_md": false 00:18:57.928 }, 00:18:57.928 "memory_domains": [ 00:18:57.928 { 00:18:57.928 "dma_device_id": "system", 00:18:57.928 "dma_device_type": 1 00:18:57.928 }, 00:18:57.928 { 00:18:57.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.928 "dma_device_type": 2 00:18:57.928 } 00:18:57.928 ], 00:18:57.928 "driver_specific": {} 00:18:57.928 }' 00:18:57.928 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.928 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.185 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.185 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.185 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.185 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.185 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.186 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.186 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.186 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
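The property verification running through this stretch applies the same four jq probes — block_size, md_size, md_interleave, dif_type — to each configured base bdev in turn (NewBaseBdev above, BaseBdev2 here, BaseBdev3 below). A compact equivalent of that loop, with the expected values taken from this run's output:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for name in NewBaseBdev BaseBdev2 BaseBdev3; do
        info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<<"$info") == 512  ]]   # data block size of the malloc disks
        [[ $(jq .md_size       <<<"$info") == null ]]   # no separate metadata region
        [[ $(jq .md_interleave <<<"$info") == null ]]
        [[ $(jq .dif_type      <<<"$info") == null ]]   # no T10 DIF configured
    done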
00:18:58.443 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.443 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.443 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.443 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:58.443 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.701 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.701 "name": "BaseBdev3", 00:18:58.701 "aliases": [ 00:18:58.701 "d7d75055-7186-4861-9206-952ff877be90" 00:18:58.701 ], 00:18:58.701 "product_name": "Malloc disk", 00:18:58.701 "block_size": 512, 00:18:58.701 "num_blocks": 65536, 00:18:58.701 "uuid": "d7d75055-7186-4861-9206-952ff877be90", 00:18:58.701 "assigned_rate_limits": { 00:18:58.701 "rw_ios_per_sec": 0, 00:18:58.701 "rw_mbytes_per_sec": 0, 00:18:58.701 "r_mbytes_per_sec": 0, 00:18:58.701 "w_mbytes_per_sec": 0 00:18:58.701 }, 00:18:58.701 "claimed": true, 00:18:58.701 "claim_type": "exclusive_write", 00:18:58.701 "zoned": false, 00:18:58.701 "supported_io_types": { 00:18:58.701 "read": true, 00:18:58.701 "write": true, 00:18:58.701 "unmap": true, 00:18:58.701 "flush": true, 00:18:58.701 "reset": true, 00:18:58.701 "nvme_admin": false, 00:18:58.701 "nvme_io": false, 00:18:58.701 "nvme_io_md": false, 00:18:58.701 "write_zeroes": true, 00:18:58.701 "zcopy": true, 00:18:58.701 "get_zone_info": false, 00:18:58.701 "zone_management": false, 00:18:58.701 "zone_append": false, 00:18:58.701 "compare": false, 00:18:58.701 "compare_and_write": false, 00:18:58.701 "abort": true, 00:18:58.701 "seek_hole": false, 00:18:58.701 "seek_data": false, 00:18:58.701 "copy": true, 00:18:58.701 "nvme_iov_md": false 00:18:58.701 }, 00:18:58.701 "memory_domains": [ 00:18:58.701 { 00:18:58.701 "dma_device_id": "system", 00:18:58.701 "dma_device_type": 1 00:18:58.701 }, 00:18:58.701 { 00:18:58.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.701 "dma_device_type": 2 00:18:58.701 } 00:18:58.701 ], 00:18:58.701 "driver_specific": {} 00:18:58.701 }' 00:18:58.701 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.701 23:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.701 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.701 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.701 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.960 23:04:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.960 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:59.219 [2024-07-13 23:04:48.618944] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.219 [2024-07-13 23:04:48.619180] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.219 [2024-07-13 23:04:48.619352] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.219 [2024-07-13 23:04:48.619576] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.219 [2024-07-13 23:04:48.619683] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 139110 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 139110 ']' 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 139110 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 139110 00:18:59.477 killing process with pid 139110 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 139110' 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 139110 00:18:59.477 [2024-07-13 23:04:48.659777] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.477 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 139110 00:18:59.477 [2024-07-13 23:04:48.686147] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.736 23:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:59.736 ************************************ 00:18:59.736 END TEST raid_state_function_test_sb 00:18:59.736 ************************************ 00:18:59.736 00:18:59.736 real 0m29.058s 00:18:59.736 user 0m55.721s 00:18:59.736 sys 0m3.521s 00:18:59.736 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.736 23:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.736 23:04:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:59.736 23:04:48 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:18:59.736 23:04:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:59.736 23:04:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.736 23:04:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.736 
************************************ 00:18:59.736 START TEST raid_superblock_test 00:18:59.736 ************************************ 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=140083 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 140083 /var/tmp/spdk-raid.sock 00:18:59.736 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 140083 ']' 00:18:59.737 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.737 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.737 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:59.737 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.737 23:04:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.737 [2024-07-13 23:04:49.053210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
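What the trace above amounts to: the raid_superblock_test harness starts a bare bdev_svc application on a private RPC socket (/var/tmp/spdk-raid.sock) with bdev_raid debug logging enabled (-L bdev_raid), waits for it to listen, and only then issues RPCs against it. A minimal sketch of that bring-up, assuming the repo paths shown in the log and substituting a plain polling loop for the harness's waitforlisten helper (the real helper also watches the PID, among other checks):

#!/usr/bin/env bash
# Start a bare SPDK bdev_svc app on a private RPC socket and wait for it.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk   # repo root as it appears in the log
SOCK=/var/tmp/spdk-raid.sock        # per-test RPC socket

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
svc_pid=$!

# Poll until the app answers a trivial RPC on the socket (stand-in for
# waitforlisten; an assumption, not the harness code itself).
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done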
00:18:59.737 [2024-07-13 23:04:49.053697] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140083 ] 00:18:59.995 [2024-07-13 23:04:49.201772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.995 [2024-07-13 23:04:49.308894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.995 [2024-07-13 23:04:49.383622] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.566 23:04:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:00.824 malloc1 00:19:00.824 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.082 [2024-07-13 23:04:50.441989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.082 [2024-07-13 23:04:50.442354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.082 [2024-07-13 23:04:50.442522] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:01.082 [2024-07-13 23:04:50.442801] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.082 [2024-07-13 23:04:50.445794] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.082 [2024-07-13 23:04:50.445985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.082 pt1 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.082 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:01.340 malloc2 00:19:01.340 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.598 [2024-07-13 23:04:50.892230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.598 [2024-07-13 23:04:50.892500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.598 [2024-07-13 23:04:50.892582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:01.598 [2024-07-13 23:04:50.892808] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.598 [2024-07-13 23:04:50.895325] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.598 [2024-07-13 23:04:50.895508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.598 pt2 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.598 23:04:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:01.856 malloc3 00:19:01.856 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:02.114 [2024-07-13 23:04:51.374222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:02.114 [2024-07-13 23:04:51.374536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.114 [2024-07-13 23:04:51.374714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:02.114 [2024-07-13 23:04:51.374883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.114 [2024-07-13 23:04:51.378004] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.114 [2024-07-13 23:04:51.378188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:02.114 pt3 00:19:02.114 
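The three passes through the @415 loop above build identical legs for the RAID: each pass creates a 32 MB malloc bdev with 512-byte blocks, then wraps it in a passthru bdev with a fixed, predictable UUID so later superblock checks can refer to stable IDs. A condensed sketch of that loop, using the exact RPCs from the trace (the loop shape simplifies the harness's (( i <= num_base_bdevs )) bookkeeping):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3; do
    # 32 MB backing store, 512-byte blocks -> 65536 blocks, matching the dumps above
    $RPC bdev_malloc_create 32 512 -b "malloc$i"
    # passthru wrapper ptN with UUID 00000000-0000-0000-0000-00000000000N
    $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

The pt1/pt2/pt3 bdevs produced here are what bdev_raid_create assembles into raid_bdev1 immediately below.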
23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:02.114 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:02.114 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:02.373 [2024-07-13 23:04:51.686607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.373 [2024-07-13 23:04:51.689186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.373 [2024-07-13 23:04:51.689438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:02.373 [2024-07-13 23:04:51.689743] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:02.373 [2024-07-13 23:04:51.689883] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:02.373 [2024-07-13 23:04:51.690066] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:02.373 [2024-07-13 23:04:51.690610] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:02.373 [2024-07-13 23:04:51.690753] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:19:02.373 [2024-07-13 23:04:51.691056] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.373 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.631 23:04:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.631 "name": "raid_bdev1", 00:19:02.631 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:02.631 "strip_size_kb": 64, 00:19:02.631 "state": "online", 00:19:02.631 "raid_level": "concat", 00:19:02.631 "superblock": true, 00:19:02.631 "num_base_bdevs": 3, 00:19:02.631 "num_base_bdevs_discovered": 3, 00:19:02.631 "num_base_bdevs_operational": 3, 00:19:02.631 "base_bdevs_list": [ 00:19:02.631 { 00:19:02.631 "name": "pt1", 00:19:02.631 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:02.631 "is_configured": true, 00:19:02.631 "data_offset": 2048, 00:19:02.631 "data_size": 63488 00:19:02.631 }, 00:19:02.631 { 00:19:02.631 "name": "pt2", 00:19:02.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.631 "is_configured": true, 00:19:02.631 "data_offset": 2048, 00:19:02.631 "data_size": 63488 00:19:02.631 }, 00:19:02.631 { 00:19:02.631 "name": "pt3", 00:19:02.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:02.631 "is_configured": true, 00:19:02.631 "data_offset": 2048, 00:19:02.631 "data_size": 63488 00:19:02.631 } 00:19:02.631 ] 00:19:02.631 }' 00:19:02.631 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.631 23:04:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.565 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:03.565 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:03.565 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:03.565 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:03.565 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:03.565 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:03.566 [2024-07-13 23:04:52.863595] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:03.566 "name": "raid_bdev1", 00:19:03.566 "aliases": [ 00:19:03.566 "593523a5-54df-4e3a-bb31-7f656e76c5bc" 00:19:03.566 ], 00:19:03.566 "product_name": "Raid Volume", 00:19:03.566 "block_size": 512, 00:19:03.566 "num_blocks": 190464, 00:19:03.566 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:03.566 "assigned_rate_limits": { 00:19:03.566 "rw_ios_per_sec": 0, 00:19:03.566 "rw_mbytes_per_sec": 0, 00:19:03.566 "r_mbytes_per_sec": 0, 00:19:03.566 "w_mbytes_per_sec": 0 00:19:03.566 }, 00:19:03.566 "claimed": false, 00:19:03.566 "zoned": false, 00:19:03.566 "supported_io_types": { 00:19:03.566 "read": true, 00:19:03.566 "write": true, 00:19:03.566 "unmap": true, 00:19:03.566 "flush": true, 00:19:03.566 "reset": true, 00:19:03.566 "nvme_admin": false, 00:19:03.566 "nvme_io": false, 00:19:03.566 "nvme_io_md": false, 00:19:03.566 "write_zeroes": true, 00:19:03.566 "zcopy": false, 00:19:03.566 "get_zone_info": false, 00:19:03.566 "zone_management": false, 00:19:03.566 "zone_append": false, 00:19:03.566 "compare": false, 00:19:03.566 "compare_and_write": false, 00:19:03.566 "abort": false, 00:19:03.566 "seek_hole": false, 00:19:03.566 "seek_data": false, 00:19:03.566 "copy": false, 00:19:03.566 "nvme_iov_md": false 00:19:03.566 }, 00:19:03.566 "memory_domains": [ 00:19:03.566 { 00:19:03.566 "dma_device_id": "system", 00:19:03.566 "dma_device_type": 1 00:19:03.566 }, 00:19:03.566 { 00:19:03.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.566 "dma_device_type": 2 00:19:03.566 }, 00:19:03.566 { 00:19:03.566 "dma_device_id": "system", 00:19:03.566 "dma_device_type": 1 00:19:03.566 }, 
00:19:03.566 { 00:19:03.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.566 "dma_device_type": 2 00:19:03.566 }, 00:19:03.566 { 00:19:03.566 "dma_device_id": "system", 00:19:03.566 "dma_device_type": 1 00:19:03.566 }, 00:19:03.566 { 00:19:03.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.566 "dma_device_type": 2 00:19:03.566 } 00:19:03.566 ], 00:19:03.566 "driver_specific": { 00:19:03.566 "raid": { 00:19:03.566 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:03.566 "strip_size_kb": 64, 00:19:03.566 "state": "online", 00:19:03.566 "raid_level": "concat", 00:19:03.566 "superblock": true, 00:19:03.566 "num_base_bdevs": 3, 00:19:03.566 "num_base_bdevs_discovered": 3, 00:19:03.566 "num_base_bdevs_operational": 3, 00:19:03.566 "base_bdevs_list": [ 00:19:03.566 { 00:19:03.566 "name": "pt1", 00:19:03.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.566 "is_configured": true, 00:19:03.566 "data_offset": 2048, 00:19:03.566 "data_size": 63488 00:19:03.566 }, 00:19:03.566 { 00:19:03.566 "name": "pt2", 00:19:03.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.566 "is_configured": true, 00:19:03.566 "data_offset": 2048, 00:19:03.566 "data_size": 63488 00:19:03.566 }, 00:19:03.566 { 00:19:03.566 "name": "pt3", 00:19:03.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.566 "is_configured": true, 00:19:03.566 "data_offset": 2048, 00:19:03.566 "data_size": 63488 00:19:03.566 } 00:19:03.566 ] 00:19:03.566 } 00:19:03.566 } 00:19:03.566 }' 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:03.566 pt2 00:19:03.566 pt3' 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:03.566 23:04:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:03.825 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:03.825 "name": "pt1", 00:19:03.825 "aliases": [ 00:19:03.825 "00000000-0000-0000-0000-000000000001" 00:19:03.825 ], 00:19:03.825 "product_name": "passthru", 00:19:03.825 "block_size": 512, 00:19:03.825 "num_blocks": 65536, 00:19:03.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.825 "assigned_rate_limits": { 00:19:03.825 "rw_ios_per_sec": 0, 00:19:03.825 "rw_mbytes_per_sec": 0, 00:19:03.825 "r_mbytes_per_sec": 0, 00:19:03.825 "w_mbytes_per_sec": 0 00:19:03.825 }, 00:19:03.825 "claimed": true, 00:19:03.825 "claim_type": "exclusive_write", 00:19:03.825 "zoned": false, 00:19:03.825 "supported_io_types": { 00:19:03.825 "read": true, 00:19:03.825 "write": true, 00:19:03.825 "unmap": true, 00:19:03.825 "flush": true, 00:19:03.825 "reset": true, 00:19:03.825 "nvme_admin": false, 00:19:03.825 "nvme_io": false, 00:19:03.825 "nvme_io_md": false, 00:19:03.825 "write_zeroes": true, 00:19:03.825 "zcopy": true, 00:19:03.825 "get_zone_info": false, 00:19:03.825 "zone_management": false, 00:19:03.825 "zone_append": false, 00:19:03.825 "compare": false, 00:19:03.825 "compare_and_write": false, 00:19:03.825 "abort": true, 00:19:03.825 "seek_hole": false, 00:19:03.825 "seek_data": false, 00:19:03.825 "copy": true, 00:19:03.825 "nvme_iov_md": 
false 00:19:03.825 }, 00:19:03.825 "memory_domains": [ 00:19:03.825 { 00:19:03.825 "dma_device_id": "system", 00:19:03.825 "dma_device_type": 1 00:19:03.825 }, 00:19:03.825 { 00:19:03.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.825 "dma_device_type": 2 00:19:03.825 } 00:19:03.825 ], 00:19:03.825 "driver_specific": { 00:19:03.825 "passthru": { 00:19:03.825 "name": "pt1", 00:19:03.825 "base_bdev_name": "malloc1" 00:19:03.825 } 00:19:03.825 } 00:19:03.825 }' 00:19:03.825 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.082 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.340 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:04.340 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.340 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:04.340 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:04.597 "name": "pt2", 00:19:04.597 "aliases": [ 00:19:04.597 "00000000-0000-0000-0000-000000000002" 00:19:04.597 ], 00:19:04.597 "product_name": "passthru", 00:19:04.597 "block_size": 512, 00:19:04.597 "num_blocks": 65536, 00:19:04.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.597 "assigned_rate_limits": { 00:19:04.597 "rw_ios_per_sec": 0, 00:19:04.597 "rw_mbytes_per_sec": 0, 00:19:04.597 "r_mbytes_per_sec": 0, 00:19:04.597 "w_mbytes_per_sec": 0 00:19:04.597 }, 00:19:04.597 "claimed": true, 00:19:04.597 "claim_type": "exclusive_write", 00:19:04.597 "zoned": false, 00:19:04.597 "supported_io_types": { 00:19:04.597 "read": true, 00:19:04.597 "write": true, 00:19:04.597 "unmap": true, 00:19:04.597 "flush": true, 00:19:04.597 "reset": true, 00:19:04.597 "nvme_admin": false, 00:19:04.597 "nvme_io": false, 00:19:04.597 "nvme_io_md": false, 00:19:04.597 "write_zeroes": true, 00:19:04.597 "zcopy": true, 00:19:04.597 "get_zone_info": false, 00:19:04.597 "zone_management": false, 00:19:04.597 "zone_append": false, 00:19:04.597 "compare": false, 00:19:04.597 "compare_and_write": false, 00:19:04.597 "abort": true, 00:19:04.597 "seek_hole": false, 00:19:04.597 "seek_data": false, 00:19:04.597 "copy": true, 00:19:04.597 "nvme_iov_md": false 00:19:04.597 }, 00:19:04.597 "memory_domains": [ 00:19:04.597 { 00:19:04.597 "dma_device_id": "system", 00:19:04.597 "dma_device_type": 1 
00:19:04.597 }, 00:19:04.597 { 00:19:04.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.597 "dma_device_type": 2 00:19:04.597 } 00:19:04.597 ], 00:19:04.597 "driver_specific": { 00:19:04.597 "passthru": { 00:19:04.597 "name": "pt2", 00:19:04.597 "base_bdev_name": "malloc2" 00:19:04.597 } 00:19:04.597 } 00:19:04.597 }' 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:04.597 23:04:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:04.854 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.112 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.112 "name": "pt3", 00:19:05.112 "aliases": [ 00:19:05.112 "00000000-0000-0000-0000-000000000003" 00:19:05.112 ], 00:19:05.112 "product_name": "passthru", 00:19:05.112 "block_size": 512, 00:19:05.112 "num_blocks": 65536, 00:19:05.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.112 "assigned_rate_limits": { 00:19:05.112 "rw_ios_per_sec": 0, 00:19:05.112 "rw_mbytes_per_sec": 0, 00:19:05.112 "r_mbytes_per_sec": 0, 00:19:05.112 "w_mbytes_per_sec": 0 00:19:05.112 }, 00:19:05.112 "claimed": true, 00:19:05.112 "claim_type": "exclusive_write", 00:19:05.112 "zoned": false, 00:19:05.112 "supported_io_types": { 00:19:05.112 "read": true, 00:19:05.112 "write": true, 00:19:05.112 "unmap": true, 00:19:05.112 "flush": true, 00:19:05.112 "reset": true, 00:19:05.112 "nvme_admin": false, 00:19:05.112 "nvme_io": false, 00:19:05.112 "nvme_io_md": false, 00:19:05.112 "write_zeroes": true, 00:19:05.112 "zcopy": true, 00:19:05.112 "get_zone_info": false, 00:19:05.112 "zone_management": false, 00:19:05.112 "zone_append": false, 00:19:05.112 "compare": false, 00:19:05.112 "compare_and_write": false, 00:19:05.112 "abort": true, 00:19:05.112 "seek_hole": false, 00:19:05.112 "seek_data": false, 00:19:05.112 "copy": true, 00:19:05.112 "nvme_iov_md": false 00:19:05.112 }, 00:19:05.112 "memory_domains": [ 00:19:05.112 { 00:19:05.112 "dma_device_id": "system", 00:19:05.112 "dma_device_type": 1 00:19:05.112 }, 00:19:05.112 { 00:19:05.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.112 "dma_device_type": 2 00:19:05.112 } 00:19:05.112 ], 
00:19:05.112 "driver_specific": { 00:19:05.112 "passthru": { 00:19:05.112 "name": "pt3", 00:19:05.112 "base_bdev_name": "malloc3" 00:19:05.112 } 00:19:05.112 } 00:19:05.112 }' 00:19:05.112 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.112 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.370 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.628 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.628 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:05.628 23:04:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:05.628 [2024-07-13 23:04:55.027940] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.887 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=593523a5-54df-4e3a-bb31-7f656e76c5bc 00:19:05.887 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 593523a5-54df-4e3a-bb31-7f656e76c5bc ']' 00:19:05.887 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:05.887 [2024-07-13 23:04:55.287790] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.887 [2024-07-13 23:04:55.287975] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.887 [2024-07-13 23:04:55.288225] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.887 [2024-07-13 23:04:55.288423] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.887 [2024-07-13 23:04:55.288535] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:19:06.144 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.144 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:06.144 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:06.144 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:06.144 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.144 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:06.403 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.403 23:04:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:06.661 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:06.661 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:06.920 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:06.920 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:07.178 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:07.436 [2024-07-13 23:04:56.756133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:07.436 [2024-07-13 23:04:56.758752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:07.436 [2024-07-13 23:04:56.758956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:07.436 [2024-07-13 23:04:56.759086] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:07.436 
[2024-07-13 23:04:56.759383] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:07.436 [2024-07-13 23:04:56.759584] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:07.436 [2024-07-13 23:04:56.759781] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.436 [2024-07-13 23:04:56.759922] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:19:07.436 request: 00:19:07.436 { 00:19:07.436 "name": "raid_bdev1", 00:19:07.436 "raid_level": "concat", 00:19:07.436 "base_bdevs": [ 00:19:07.436 "malloc1", 00:19:07.436 "malloc2", 00:19:07.436 "malloc3" 00:19:07.436 ], 00:19:07.436 "strip_size_kb": 64, 00:19:07.436 "superblock": false, 00:19:07.436 "method": "bdev_raid_create", 00:19:07.436 "req_id": 1 00:19:07.436 } 00:19:07.436 Got JSON-RPC error response 00:19:07.436 response: 00:19:07.436 { 00:19:07.436 "code": -17, 00:19:07.436 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:07.436 } 00:19:07.436 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:07.436 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:07.436 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:07.436 23:04:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:07.436 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.436 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:07.695 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:07.695 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:07.695 23:04:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:07.953 [2024-07-13 23:04:57.224343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:07.953 [2024-07-13 23:04:57.224601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.953 [2024-07-13 23:04:57.224685] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:07.953 [2024-07-13 23:04:57.224996] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.953 [2024-07-13 23:04:57.227458] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.953 [2024-07-13 23:04:57.227641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:07.953 [2024-07-13 23:04:57.227900] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:07.953 [2024-07-13 23:04:57.228076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:07.953 pt1 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:07.953 23:04:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.953 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.212 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.212 "name": "raid_bdev1", 00:19:08.212 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:08.212 "strip_size_kb": 64, 00:19:08.212 "state": "configuring", 00:19:08.212 "raid_level": "concat", 00:19:08.212 "superblock": true, 00:19:08.212 "num_base_bdevs": 3, 00:19:08.212 "num_base_bdevs_discovered": 1, 00:19:08.212 "num_base_bdevs_operational": 3, 00:19:08.212 "base_bdevs_list": [ 00:19:08.212 { 00:19:08.212 "name": "pt1", 00:19:08.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:08.212 "is_configured": true, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 }, 00:19:08.212 { 00:19:08.212 "name": null, 00:19:08.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.212 "is_configured": false, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 }, 00:19:08.212 { 00:19:08.212 "name": null, 00:19:08.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:08.212 "is_configured": false, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 } 00:19:08.212 ] 00:19:08.212 }' 00:19:08.212 23:04:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.212 23:04:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.780 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:08.781 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:09.039 [2024-07-13 23:04:58.312737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:09.039 [2024-07-13 23:04:58.313134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.039 [2024-07-13 23:04:58.313377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:09.039 [2024-07-13 23:04:58.313561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.039 [2024-07-13 23:04:58.314305] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.039 [2024-07-13 23:04:58.314483] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:19:09.039 [2024-07-13 23:04:58.314784] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:09.039 [2024-07-13 23:04:58.314942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:09.039 pt2 00:19:09.039 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:09.297 [2024-07-13 23:04:58.540762] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.297 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.556 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.556 "name": "raid_bdev1", 00:19:09.556 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:09.556 "strip_size_kb": 64, 00:19:09.556 "state": "configuring", 00:19:09.556 "raid_level": "concat", 00:19:09.556 "superblock": true, 00:19:09.556 "num_base_bdevs": 3, 00:19:09.556 "num_base_bdevs_discovered": 1, 00:19:09.556 "num_base_bdevs_operational": 3, 00:19:09.556 "base_bdevs_list": [ 00:19:09.556 { 00:19:09.556 "name": "pt1", 00:19:09.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.556 "is_configured": true, 00:19:09.556 "data_offset": 2048, 00:19:09.556 "data_size": 63488 00:19:09.556 }, 00:19:09.556 { 00:19:09.556 "name": null, 00:19:09.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.556 "is_configured": false, 00:19:09.556 "data_offset": 2048, 00:19:09.556 "data_size": 63488 00:19:09.556 }, 00:19:09.556 { 00:19:09.556 "name": null, 00:19:09.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.556 "is_configured": false, 00:19:09.556 "data_offset": 2048, 00:19:09.556 "data_size": 63488 00:19:09.556 } 00:19:09.556 ] 00:19:09.556 }' 00:19:09.556 23:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.556 23:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:10.123 23:04:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:10.123 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:10.382 [2024-07-13 23:04:59.608981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:10.382 [2024-07-13 23:04:59.609317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.382 [2024-07-13 23:04:59.609471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:10.382 [2024-07-13 23:04:59.609600] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.382 [2024-07-13 23:04:59.610166] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.382 [2024-07-13 23:04:59.610357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:10.382 [2024-07-13 23:04:59.610605] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:10.382 [2024-07-13 23:04:59.610731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:10.382 pt2 00:19:10.382 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:10.382 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:10.382 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:10.641 [2024-07-13 23:04:59.877063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:10.641 [2024-07-13 23:04:59.877463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.641 [2024-07-13 23:04:59.877668] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:10.641 [2024-07-13 23:04:59.877830] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.641 [2024-07-13 23:04:59.878465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.641 [2024-07-13 23:04:59.878674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:10.641 [2024-07-13 23:04:59.878922] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:10.641 [2024-07-13 23:04:59.879053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:10.641 [2024-07-13 23:04:59.879269] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:10.641 [2024-07-13 23:04:59.879445] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:10.641 [2024-07-13 23:04:59.879591] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:19:10.641 [2024-07-13 23:04:59.879997] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:10.641 [2024-07-13 23:04:59.880166] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:10.641 [2024-07-13 23:04:59.880392] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.641 pt3 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.641 23:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.899 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.899 "name": "raid_bdev1", 00:19:10.899 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:10.899 "strip_size_kb": 64, 00:19:10.899 "state": "online", 00:19:10.899 "raid_level": "concat", 00:19:10.899 "superblock": true, 00:19:10.899 "num_base_bdevs": 3, 00:19:10.899 "num_base_bdevs_discovered": 3, 00:19:10.899 "num_base_bdevs_operational": 3, 00:19:10.899 "base_bdevs_list": [ 00:19:10.899 { 00:19:10.899 "name": "pt1", 00:19:10.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.899 "is_configured": true, 00:19:10.899 "data_offset": 2048, 00:19:10.899 "data_size": 63488 00:19:10.899 }, 00:19:10.899 { 00:19:10.899 "name": "pt2", 00:19:10.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.899 "is_configured": true, 00:19:10.899 "data_offset": 2048, 00:19:10.899 "data_size": 63488 00:19:10.899 }, 00:19:10.899 { 00:19:10.899 "name": "pt3", 00:19:10.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:10.899 "is_configured": true, 00:19:10.899 "data_offset": 2048, 00:19:10.899 "data_size": 63488 00:19:10.899 } 00:19:10.899 ] 00:19:10.899 }' 00:19:10.899 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.899 23:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
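At this point the raid is back online: re-creating pt2 and pt3 let bdev_raid find their on-disk superblocks and re-claim them, and the @482 check confirms the assembled state. Both verify_raid_bdev_state and the verify_raid_bdev_properties pass that follows reduce to the same pattern: dump bdev JSON over RPC and assert individual fields with jq. A condensed sketch of the state check, with field names taken from the JSON dumps in this log (the bare [[ ]] asserts stand in for the harness's tmp-variable plumbing):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Pull the raid bdev's JSON once, then assert the fields the test cares about.
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

[[ $(jq -r .state         <<<"$info") == online ]]
[[ $(jq -r .raid_level    <<<"$info") == concat ]]
[[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 3 ]]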
00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:11.466 23:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:11.724 [2024-07-13 23:05:01.017648] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.724 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:11.724 "name": "raid_bdev1", 00:19:11.724 "aliases": [ 00:19:11.724 "593523a5-54df-4e3a-bb31-7f656e76c5bc" 00:19:11.724 ], 00:19:11.724 "product_name": "Raid Volume", 00:19:11.724 "block_size": 512, 00:19:11.724 "num_blocks": 190464, 00:19:11.724 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:11.724 "assigned_rate_limits": { 00:19:11.724 "rw_ios_per_sec": 0, 00:19:11.724 "rw_mbytes_per_sec": 0, 00:19:11.724 "r_mbytes_per_sec": 0, 00:19:11.724 "w_mbytes_per_sec": 0 00:19:11.724 }, 00:19:11.724 "claimed": false, 00:19:11.724 "zoned": false, 00:19:11.724 "supported_io_types": { 00:19:11.724 "read": true, 00:19:11.724 "write": true, 00:19:11.724 "unmap": true, 00:19:11.724 "flush": true, 00:19:11.724 "reset": true, 00:19:11.724 "nvme_admin": false, 00:19:11.724 "nvme_io": false, 00:19:11.724 "nvme_io_md": false, 00:19:11.724 "write_zeroes": true, 00:19:11.724 "zcopy": false, 00:19:11.724 "get_zone_info": false, 00:19:11.724 "zone_management": false, 00:19:11.724 "zone_append": false, 00:19:11.724 "compare": false, 00:19:11.724 "compare_and_write": false, 00:19:11.724 "abort": false, 00:19:11.724 "seek_hole": false, 00:19:11.724 "seek_data": false, 00:19:11.724 "copy": false, 00:19:11.724 "nvme_iov_md": false 00:19:11.724 }, 00:19:11.724 "memory_domains": [ 00:19:11.724 { 00:19:11.724 "dma_device_id": "system", 00:19:11.724 "dma_device_type": 1 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.724 "dma_device_type": 2 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "dma_device_id": "system", 00:19:11.724 "dma_device_type": 1 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.724 "dma_device_type": 2 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "dma_device_id": "system", 00:19:11.724 "dma_device_type": 1 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.724 "dma_device_type": 2 00:19:11.724 } 00:19:11.724 ], 00:19:11.724 "driver_specific": { 00:19:11.724 "raid": { 00:19:11.724 "uuid": "593523a5-54df-4e3a-bb31-7f656e76c5bc", 00:19:11.724 "strip_size_kb": 64, 00:19:11.724 "state": "online", 00:19:11.724 "raid_level": "concat", 00:19:11.724 "superblock": true, 00:19:11.724 "num_base_bdevs": 3, 00:19:11.724 "num_base_bdevs_discovered": 3, 00:19:11.724 "num_base_bdevs_operational": 3, 00:19:11.724 "base_bdevs_list": [ 00:19:11.724 { 00:19:11.724 "name": "pt1", 00:19:11.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.724 "is_configured": true, 00:19:11.724 "data_offset": 2048, 00:19:11.724 "data_size": 63488 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "name": "pt2", 00:19:11.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.724 "is_configured": true, 00:19:11.724 "data_offset": 2048, 00:19:11.724 "data_size": 63488 00:19:11.724 }, 00:19:11.724 { 00:19:11.724 "name": "pt3", 00:19:11.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:11.724 "is_configured": true, 00:19:11.724 "data_offset": 2048, 00:19:11.724 "data_size": 63488 00:19:11.724 } 
00:19:11.724 ] 00:19:11.724 } 00:19:11.724 } 00:19:11.724 }' 00:19:11.724 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:11.724 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:11.724 pt2 00:19:11.724 pt3' 00:19:11.724 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:11.724 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:11.724 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.981 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.981 "name": "pt1", 00:19:11.981 "aliases": [ 00:19:11.981 "00000000-0000-0000-0000-000000000001" 00:19:11.981 ], 00:19:11.981 "product_name": "passthru", 00:19:11.981 "block_size": 512, 00:19:11.981 "num_blocks": 65536, 00:19:11.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.981 "assigned_rate_limits": { 00:19:11.981 "rw_ios_per_sec": 0, 00:19:11.981 "rw_mbytes_per_sec": 0, 00:19:11.981 "r_mbytes_per_sec": 0, 00:19:11.981 "w_mbytes_per_sec": 0 00:19:11.981 }, 00:19:11.981 "claimed": true, 00:19:11.981 "claim_type": "exclusive_write", 00:19:11.981 "zoned": false, 00:19:11.981 "supported_io_types": { 00:19:11.981 "read": true, 00:19:11.981 "write": true, 00:19:11.981 "unmap": true, 00:19:11.981 "flush": true, 00:19:11.981 "reset": true, 00:19:11.981 "nvme_admin": false, 00:19:11.981 "nvme_io": false, 00:19:11.981 "nvme_io_md": false, 00:19:11.981 "write_zeroes": true, 00:19:11.981 "zcopy": true, 00:19:11.981 "get_zone_info": false, 00:19:11.981 "zone_management": false, 00:19:11.981 "zone_append": false, 00:19:11.981 "compare": false, 00:19:11.981 "compare_and_write": false, 00:19:11.981 "abort": true, 00:19:11.981 "seek_hole": false, 00:19:11.981 "seek_data": false, 00:19:11.981 "copy": true, 00:19:11.981 "nvme_iov_md": false 00:19:11.981 }, 00:19:11.981 "memory_domains": [ 00:19:11.981 { 00:19:11.981 "dma_device_id": "system", 00:19:11.981 "dma_device_type": 1 00:19:11.981 }, 00:19:11.981 { 00:19:11.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.981 "dma_device_type": 2 00:19:11.981 } 00:19:11.981 ], 00:19:11.981 "driver_specific": { 00:19:11.981 "passthru": { 00:19:11.981 "name": "pt1", 00:19:11.981 "base_bdev_name": "malloc1" 00:19:11.981 } 00:19:11.981 } 00:19:11.981 }' 00:19:11.981 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:12.239 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:19:12.496 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:12.496 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:12.496 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:12.496 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:12.496 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:12.754 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:12.754 "name": "pt2", 00:19:12.754 "aliases": [ 00:19:12.754 "00000000-0000-0000-0000-000000000002" 00:19:12.754 ], 00:19:12.754 "product_name": "passthru", 00:19:12.754 "block_size": 512, 00:19:12.754 "num_blocks": 65536, 00:19:12.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.754 "assigned_rate_limits": { 00:19:12.754 "rw_ios_per_sec": 0, 00:19:12.754 "rw_mbytes_per_sec": 0, 00:19:12.754 "r_mbytes_per_sec": 0, 00:19:12.754 "w_mbytes_per_sec": 0 00:19:12.754 }, 00:19:12.754 "claimed": true, 00:19:12.754 "claim_type": "exclusive_write", 00:19:12.754 "zoned": false, 00:19:12.754 "supported_io_types": { 00:19:12.754 "read": true, 00:19:12.754 "write": true, 00:19:12.754 "unmap": true, 00:19:12.754 "flush": true, 00:19:12.754 "reset": true, 00:19:12.754 "nvme_admin": false, 00:19:12.754 "nvme_io": false, 00:19:12.754 "nvme_io_md": false, 00:19:12.754 "write_zeroes": true, 00:19:12.754 "zcopy": true, 00:19:12.754 "get_zone_info": false, 00:19:12.754 "zone_management": false, 00:19:12.754 "zone_append": false, 00:19:12.754 "compare": false, 00:19:12.754 "compare_and_write": false, 00:19:12.754 "abort": true, 00:19:12.754 "seek_hole": false, 00:19:12.754 "seek_data": false, 00:19:12.754 "copy": true, 00:19:12.754 "nvme_iov_md": false 00:19:12.754 }, 00:19:12.754 "memory_domains": [ 00:19:12.754 { 00:19:12.754 "dma_device_id": "system", 00:19:12.754 "dma_device_type": 1 00:19:12.754 }, 00:19:12.754 { 00:19:12.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.754 "dma_device_type": 2 00:19:12.754 } 00:19:12.754 ], 00:19:12.754 "driver_specific": { 00:19:12.754 "passthru": { 00:19:12.754 "name": "pt2", 00:19:12.754 "base_bdev_name": "malloc2" 00:19:12.754 } 00:19:12.754 } 00:19:12.754 }' 00:19:12.754 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:12.754 23:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:12.754 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:12.754 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:12.754 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:12.754 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:12.754 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.011 23:05:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:13.011 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:13.269 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:13.269 "name": "pt3", 00:19:13.269 "aliases": [ 00:19:13.269 "00000000-0000-0000-0000-000000000003" 00:19:13.269 ], 00:19:13.269 "product_name": "passthru", 00:19:13.269 "block_size": 512, 00:19:13.269 "num_blocks": 65536, 00:19:13.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:13.270 "assigned_rate_limits": { 00:19:13.270 "rw_ios_per_sec": 0, 00:19:13.270 "rw_mbytes_per_sec": 0, 00:19:13.270 "r_mbytes_per_sec": 0, 00:19:13.270 "w_mbytes_per_sec": 0 00:19:13.270 }, 00:19:13.270 "claimed": true, 00:19:13.270 "claim_type": "exclusive_write", 00:19:13.270 "zoned": false, 00:19:13.270 "supported_io_types": { 00:19:13.270 "read": true, 00:19:13.270 "write": true, 00:19:13.270 "unmap": true, 00:19:13.270 "flush": true, 00:19:13.270 "reset": true, 00:19:13.270 "nvme_admin": false, 00:19:13.270 "nvme_io": false, 00:19:13.270 "nvme_io_md": false, 00:19:13.270 "write_zeroes": true, 00:19:13.270 "zcopy": true, 00:19:13.270 "get_zone_info": false, 00:19:13.270 "zone_management": false, 00:19:13.270 "zone_append": false, 00:19:13.270 "compare": false, 00:19:13.270 "compare_and_write": false, 00:19:13.270 "abort": true, 00:19:13.270 "seek_hole": false, 00:19:13.270 "seek_data": false, 00:19:13.270 "copy": true, 00:19:13.270 "nvme_iov_md": false 00:19:13.270 }, 00:19:13.270 "memory_domains": [ 00:19:13.270 { 00:19:13.270 "dma_device_id": "system", 00:19:13.270 "dma_device_type": 1 00:19:13.270 }, 00:19:13.270 { 00:19:13.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.270 "dma_device_type": 2 00:19:13.270 } 00:19:13.270 ], 00:19:13.270 "driver_specific": { 00:19:13.270 "passthru": { 00:19:13.270 "name": "pt3", 00:19:13.270 "base_bdev_name": "malloc3" 00:19:13.270 } 00:19:13.270 } 00:19:13.270 }' 00:19:13.270 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.270 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.270 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:13.270 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.528 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.528 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:13.528 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.528 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.528 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:13.528 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.529 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.529 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:13.529 23:05:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:13.529 23:05:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:13.787 [2024-07-13 23:05:03.126179] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 593523a5-54df-4e3a-bb31-7f656e76c5bc '!=' 593523a5-54df-4e3a-bb31-7f656e76c5bc ']' 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 140083 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 140083 ']' 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 140083 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140083 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140083' 00:19:13.787 killing process with pid 140083 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 140083 00:19:13.787 [2024-07-13 23:05:03.170925] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.787 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 140083 00:19:13.787 [2024-07-13 23:05:03.171342] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.787 [2024-07-13 23:05:03.171628] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.787 [2024-07-13 23:05:03.171799] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:14.044 [2024-07-13 23:05:03.212699] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.301 23:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:14.301 00:19:14.301 real 0m14.536s 00:19:14.301 user 0m26.708s 00:19:14.301 sys 0m2.017s 00:19:14.301 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.301 23:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.301 ************************************ 00:19:14.301 END TEST raid_superblock_test 00:19:14.301 ************************************ 00:19:14.301 23:05:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:14.301 23:05:03 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:14.301 23:05:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:14.301 23:05:03 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.301 23:05:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.301 ************************************ 00:19:14.301 START TEST raid_read_error_test 00:19:14.301 ************************************ 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.SjAzK2KLph 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140565 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140565 
/var/tmp/spdk-raid.sock 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 140565 ']' 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:14.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.301 23:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.301 [2024-07-13 23:05:03.659556] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:14.301 [2024-07-13 23:05:03.660068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140565 ] 00:19:14.559 [2024-07-13 23:05:03.800762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.559 [2024-07-13 23:05:03.884850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.559 [2024-07-13 23:05:03.958738] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.492 23:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.492 23:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:15.492 23:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:15.492 23:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:15.492 BaseBdev1_malloc 00:19:15.492 23:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:15.749 true 00:19:15.749 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:16.007 [2024-07-13 23:05:05.282550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:16.007 [2024-07-13 23:05:05.282831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.007 [2024-07-13 23:05:05.282930] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:19:16.007 [2024-07-13 23:05:05.283262] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.007 [2024-07-13 23:05:05.286164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.007 [2024-07-13 23:05:05.286349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:16.007 BaseBdev1 00:19:16.007 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:16.007 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:16.264 BaseBdev2_malloc 00:19:16.264 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:16.521 true 00:19:16.521 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:16.521 [2024-07-13 23:05:05.928797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:16.521 [2024-07-13 23:05:05.929113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.521 [2024-07-13 23:05:05.929206] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:16.521 [2024-07-13 23:05:05.929519] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.779 [2024-07-13 23:05:05.932257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.779 [2024-07-13 23:05:05.932458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:16.779 BaseBdev2 00:19:16.779 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:16.779 23:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:16.779 BaseBdev3_malloc 00:19:17.037 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:17.037 true 00:19:17.037 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:17.296 [2024-07-13 23:05:06.602511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:17.296 [2024-07-13 23:05:06.602837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.296 [2024-07-13 23:05:06.602954] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:17.296 [2024-07-13 23:05:06.603225] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.296 [2024-07-13 23:05:06.606096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.296 [2024-07-13 23:05:06.606296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:17.296 BaseBdev3 00:19:17.296 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:17.554 [2024-07-13 23:05:06.822801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.554 [2024-07-13 23:05:06.825248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.554 [2024-07-13 23:05:06.825509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.554 [2024-07-13 23:05:06.825934] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:17.554 [2024-07-13 23:05:06.826069] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:17.554 [2024-07-13 23:05:06.826268] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:17.554 [2024-07-13 23:05:06.826806] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:17.554 [2024-07-13 23:05:06.826937] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:17.554 [2024-07-13 23:05:06.827250] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.554 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.555 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.555 23:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.813 23:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.813 "name": "raid_bdev1", 00:19:17.813 "uuid": "0c9ce27e-886f-45a8-9149-ffac973e5993", 00:19:17.813 "strip_size_kb": 64, 00:19:17.813 "state": "online", 00:19:17.813 "raid_level": "concat", 00:19:17.813 "superblock": true, 00:19:17.813 "num_base_bdevs": 3, 00:19:17.813 "num_base_bdevs_discovered": 3, 00:19:17.813 "num_base_bdevs_operational": 3, 00:19:17.813 "base_bdevs_list": [ 00:19:17.813 { 00:19:17.813 "name": "BaseBdev1", 00:19:17.813 "uuid": "2217c02a-f503-5bce-9784-dc570e0582c0", 00:19:17.813 "is_configured": true, 00:19:17.813 "data_offset": 2048, 00:19:17.813 "data_size": 63488 00:19:17.813 }, 00:19:17.813 { 00:19:17.813 "name": "BaseBdev2", 00:19:17.813 "uuid": "fa9ae093-ee53-56a6-84ed-4bbbf8538d72", 00:19:17.813 "is_configured": true, 00:19:17.813 "data_offset": 2048, 00:19:17.813 "data_size": 63488 00:19:17.813 }, 00:19:17.813 { 00:19:17.813 "name": "BaseBdev3", 00:19:17.813 "uuid": "47bb8e7c-fe59-5767-b520-fdad958761d9", 00:19:17.813 "is_configured": true, 00:19:17.813 "data_offset": 2048, 00:19:17.813 "data_size": 63488 00:19:17.813 } 00:19:17.813 ] 00:19:17.813 }' 00:19:17.813 23:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.813 23:05:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
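The trace above captures the full RPC sequence raid_read_error_test uses to stand up its device stack: each of the three legs is a malloc bdev wrapped by an error bdev (which SPDK names by prefixing EE_) and then by a passthru bdev, so failures can be injected one layer beneath the raid, and the assembled array is verified through bdev_raid_get_bdevs plus a jq filter. A minimal sketch of the same sequence, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock and that rpc.py is invoked from the SPDK repo root (the test itself uses the absolute /home/vagrant/spdk_repo path):

    # Build the malloc -> error -> passthru stack for each leg (names mirror the trace)
    for i in 1 2 3; do
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev${i}_malloc            # yields EE_BaseBdev${i}_malloc
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # Assemble the concat array with a 64 KB strip (-z 64) and an on-disk superblock (-s)
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # Once bdevperf traffic is running, fail every read on the first leg:
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure

As the continuation of the trace shows, after the injection the harness re-verifies the array state and then tears it down with bdev_raid_delete, expecting a nonzero failure rate (fail_per_s) in the bdevperf log for a concat level, which has no redundancy.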
00:19:18.379 23:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:18.379 23:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:18.379 [2024-07-13 23:05:07.779985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:19.350 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.608 23:05:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.867 23:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.867 "name": "raid_bdev1", 00:19:19.867 "uuid": "0c9ce27e-886f-45a8-9149-ffac973e5993", 00:19:19.867 "strip_size_kb": 64, 00:19:19.867 "state": "online", 00:19:19.867 "raid_level": "concat", 00:19:19.867 "superblock": true, 00:19:19.867 "num_base_bdevs": 3, 00:19:19.867 "num_base_bdevs_discovered": 3, 00:19:19.867 "num_base_bdevs_operational": 3, 00:19:19.867 "base_bdevs_list": [ 00:19:19.867 { 00:19:19.867 "name": "BaseBdev1", 00:19:19.867 "uuid": "2217c02a-f503-5bce-9784-dc570e0582c0", 00:19:19.867 "is_configured": true, 00:19:19.867 "data_offset": 2048, 00:19:19.867 "data_size": 63488 00:19:19.867 }, 00:19:19.867 { 00:19:19.867 "name": "BaseBdev2", 00:19:19.867 "uuid": "fa9ae093-ee53-56a6-84ed-4bbbf8538d72", 00:19:19.867 "is_configured": true, 00:19:19.867 "data_offset": 2048, 00:19:19.867 "data_size": 63488 00:19:19.867 }, 00:19:19.867 { 00:19:19.867 "name": "BaseBdev3", 00:19:19.867 "uuid": "47bb8e7c-fe59-5767-b520-fdad958761d9", 00:19:19.867 "is_configured": true, 00:19:19.867 "data_offset": 2048, 00:19:19.867 "data_size": 63488 00:19:19.867 } 00:19:19.867 ] 
00:19:19.867 }' 00:19:19.867 23:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.867 23:05:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.802 23:05:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:20.802 [2024-07-13 23:05:10.097567] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.802 [2024-07-13 23:05:10.097884] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.802 [2024-07-13 23:05:10.101103] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.802 [2024-07-13 23:05:10.101295] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.802 [2024-07-13 23:05:10.101385] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.802 [2024-07-13 23:05:10.101716] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:19:20.802 0 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140565 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 140565 ']' 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 140565 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140565 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140565' 00:19:20.802 killing process with pid 140565 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 140565 00:19:20.802 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 140565 00:19:20.802 [2024-07-13 23:05:10.148139] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.802 [2024-07-13 23:05:10.184671] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.SjAzK2KLph 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:19:21.369 00:19:21.369 real 0m6.927s 00:19:21.369 user 0m11.122s 
00:19:21.369 sys 0m0.896s 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:21.369 23:05:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.369 ************************************ 00:19:21.369 END TEST raid_read_error_test 00:19:21.369 ************************************ 00:19:21.369 23:05:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:21.369 23:05:10 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:21.369 23:05:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:21.369 23:05:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.369 23:05:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.369 ************************************ 00:19:21.369 START TEST raid_write_error_test 00:19:21.369 ************************************ 00:19:21.369 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:19:21.369 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:21.370 23:05:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BYShjsn7PK 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140758 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140758 /var/tmp/spdk-raid.sock 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 140758 ']' 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:21.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.370 23:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.370 [2024-07-13 23:05:10.656614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:21.370 [2024-07-13 23:05:10.657119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140758 ] 00:19:21.628 [2024-07-13 23:05:10.797713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.628 [2024-07-13 23:05:10.875700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.628 [2024-07-13 23:05:10.948510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.565 23:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.565 23:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:22.565 23:05:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:22.565 23:05:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:22.565 BaseBdev1_malloc 00:19:22.565 23:05:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:22.823 true 00:19:22.823 23:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:23.081 [2024-07-13 23:05:12.292635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:23.081 [2024-07-13 23:05:12.292963] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.081 [2024-07-13 23:05:12.293148] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:19:23.081 [2024-07-13 23:05:12.293316] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.081 [2024-07-13 23:05:12.296178] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.081 [2024-07-13 23:05:12.296367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.081 BaseBdev1 00:19:23.081 23:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:23.081 23:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:23.339 BaseBdev2_malloc 00:19:23.339 23:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:23.596 true 00:19:23.596 23:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:23.596 [2024-07-13 23:05:13.002383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:23.596 [2024-07-13 23:05:13.002854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.596 [2024-07-13 23:05:13.003061] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:23.596 [2024-07-13 23:05:13.003232] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.854 [2024-07-13 23:05:13.006039] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.854 [2024-07-13 23:05:13.006209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:23.854 BaseBdev2 00:19:23.854 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:23.854 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:24.112 BaseBdev3_malloc 00:19:24.112 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:24.112 true 00:19:24.369 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:24.369 [2024-07-13 23:05:13.726876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:24.369 [2024-07-13 23:05:13.727234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.369 [2024-07-13 23:05:13.727328] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:24.369 [2024-07-13 23:05:13.727710] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.369 [2024-07-13 23:05:13.730735] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.369 [2024-07-13 23:05:13.730996] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:24.369 BaseBdev3 00:19:24.369 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:24.627 [2024-07-13 23:05:13.951423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.627 [2024-07-13 23:05:13.953782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.627 [2024-07-13 23:05:13.954031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:24.627 [2024-07-13 23:05:13.954388] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:24.628 [2024-07-13 23:05:13.954540] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:24.628 [2024-07-13 23:05:13.954732] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:24.628 [2024-07-13 23:05:13.955388] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:24.628 [2024-07-13 23:05:13.955512] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:24.628 [2024-07-13 23:05:13.955824] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.628 23:05:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.886 23:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.886 "name": "raid_bdev1", 00:19:24.886 "uuid": "6858573a-7c9e-4d11-859e-c36e742fea24", 00:19:24.886 "strip_size_kb": 64, 00:19:24.886 "state": "online", 00:19:24.886 "raid_level": "concat", 00:19:24.886 "superblock": true, 00:19:24.886 "num_base_bdevs": 3, 00:19:24.886 "num_base_bdevs_discovered": 3, 00:19:24.886 "num_base_bdevs_operational": 3, 00:19:24.886 "base_bdevs_list": [ 00:19:24.886 { 00:19:24.886 "name": "BaseBdev1", 00:19:24.886 "uuid": "98159f2d-8354-521e-9010-f6460783e036", 00:19:24.886 "is_configured": true, 
00:19:24.886 "data_offset": 2048, 00:19:24.886 "data_size": 63488 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "name": "BaseBdev2", 00:19:24.886 "uuid": "af15aaae-2018-5baf-93c3-011bc956083a", 00:19:24.886 "is_configured": true, 00:19:24.886 "data_offset": 2048, 00:19:24.886 "data_size": 63488 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "name": "BaseBdev3", 00:19:24.886 "uuid": "3d71f7ff-053f-56c2-815b-56cf8d0c136e", 00:19:24.886 "is_configured": true, 00:19:24.886 "data_offset": 2048, 00:19:24.886 "data_size": 63488 00:19:24.886 } 00:19:24.886 ] 00:19:24.886 }' 00:19:24.886 23:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.886 23:05:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.820 23:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:25.820 23:05:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:25.820 [2024-07-13 23:05:14.960666] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:26.756 23:05:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.756 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.015 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.015 "name": "raid_bdev1", 00:19:27.015 "uuid": "6858573a-7c9e-4d11-859e-c36e742fea24", 00:19:27.015 "strip_size_kb": 64, 00:19:27.015 "state": "online", 00:19:27.015 "raid_level": "concat", 00:19:27.015 "superblock": true, 00:19:27.015 "num_base_bdevs": 3, 00:19:27.015 "num_base_bdevs_discovered": 3, 
00:19:27.015 "num_base_bdevs_operational": 3, 00:19:27.015 "base_bdevs_list": [ 00:19:27.015 { 00:19:27.015 "name": "BaseBdev1", 00:19:27.015 "uuid": "98159f2d-8354-521e-9010-f6460783e036", 00:19:27.015 "is_configured": true, 00:19:27.015 "data_offset": 2048, 00:19:27.015 "data_size": 63488 00:19:27.015 }, 00:19:27.015 { 00:19:27.015 "name": "BaseBdev2", 00:19:27.015 "uuid": "af15aaae-2018-5baf-93c3-011bc956083a", 00:19:27.015 "is_configured": true, 00:19:27.015 "data_offset": 2048, 00:19:27.015 "data_size": 63488 00:19:27.015 }, 00:19:27.015 { 00:19:27.015 "name": "BaseBdev3", 00:19:27.015 "uuid": "3d71f7ff-053f-56c2-815b-56cf8d0c136e", 00:19:27.015 "is_configured": true, 00:19:27.015 "data_offset": 2048, 00:19:27.015 "data_size": 63488 00:19:27.015 } 00:19:27.015 ] 00:19:27.015 }' 00:19:27.015 23:05:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.015 23:05:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:27.949 [2024-07-13 23:05:17.291110] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.949 [2024-07-13 23:05:17.291463] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.949 [2024-07-13 23:05:17.294835] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.949 [2024-07-13 23:05:17.295145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.949 [2024-07-13 23:05:17.295235] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.949 [2024-07-13 23:05:17.295448] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:19:27.949 0 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140758 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 140758 ']' 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 140758 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140758 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140758' 00:19:27.949 killing process with pid 140758 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 140758 00:19:27.949 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 140758 00:19:27.949 [2024-07-13 23:05:17.344453] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.206 [2024-07-13 23:05:17.380972] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.BYShjsn7PK 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:28.532 ************************************ 00:19:28.532 END TEST raid_write_error_test 00:19:28.532 ************************************ 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:19:28.532 00:19:28.532 real 0m7.137s 00:19:28.532 user 0m11.330s 00:19:28.532 sys 0m1.051s 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:28.532 23:05:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.532 23:05:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:28.532 23:05:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:28.532 23:05:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:28.532 23:05:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:28.532 23:05:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.532 23:05:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.532 ************************************ 00:19:28.532 START TEST raid_state_function_test 00:19:28.532 ************************************ 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=140951 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 140951' 00:19:28.532 Process raid pid: 140951 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 140951 /var/tmp/spdk-raid.sock 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 140951 ']' 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:28.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.532 23:05:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.532 [2024-07-13 23:05:17.833542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
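raid_state_function_test exercises the same RPC surface against a bare bdev_svc app instead of bdevperf: it creates a raid1 array whose base bdevs do not exist yet and verifies that the array is parked in the "configuring" state rather than coming online. A minimal sketch of that check, assuming the same socket path and that jq is available (extracting .state alone is illustrative; the harness itself compares the full JSON through verify_raid_bdev_state):

    # Create the array before any of its members exist; it cannot come online yet
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Query the array and confirm it reports "configuring"
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'        # expect: configuring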
00:19:28.532 [2024-07-13 23:05:17.834035] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.806 [2024-07-13 23:05:17.976476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.806 [2024-07-13 23:05:18.076183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.806 [2024-07-13 23:05:18.149411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.370 23:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.370 23:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:29.370 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:29.627 [2024-07-13 23:05:18.966867] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.627 [2024-07-13 23:05:18.967127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.627 [2024-07-13 23:05:18.967287] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.627 [2024-07-13 23:05:18.967352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.627 [2024-07-13 23:05:18.967525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.627 [2024-07-13 23:05:18.967639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.627 23:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.883 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.883 "name": "Existed_Raid", 00:19:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.883 
"strip_size_kb": 0, 00:19:29.883 "state": "configuring", 00:19:29.883 "raid_level": "raid1", 00:19:29.883 "superblock": false, 00:19:29.883 "num_base_bdevs": 3, 00:19:29.883 "num_base_bdevs_discovered": 0, 00:19:29.883 "num_base_bdevs_operational": 3, 00:19:29.883 "base_bdevs_list": [ 00:19:29.883 { 00:19:29.883 "name": "BaseBdev1", 00:19:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.883 "is_configured": false, 00:19:29.883 "data_offset": 0, 00:19:29.883 "data_size": 0 00:19:29.883 }, 00:19:29.883 { 00:19:29.883 "name": "BaseBdev2", 00:19:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.883 "is_configured": false, 00:19:29.883 "data_offset": 0, 00:19:29.883 "data_size": 0 00:19:29.883 }, 00:19:29.883 { 00:19:29.883 "name": "BaseBdev3", 00:19:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.884 "is_configured": false, 00:19:29.884 "data_offset": 0, 00:19:29.884 "data_size": 0 00:19:29.884 } 00:19:29.884 ] 00:19:29.884 }' 00:19:29.884 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.884 23:05:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.817 23:05:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.817 [2024-07-13 23:05:20.151008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.817 [2024-07-13 23:05:20.151205] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:30.817 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:31.075 [2024-07-13 23:05:20.363079] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.075 [2024-07-13 23:05:20.363323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.075 [2024-07-13 23:05:20.363451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.075 [2024-07-13 23:05:20.363546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.075 [2024-07-13 23:05:20.363804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:31.075 [2024-07-13 23:05:20.363897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:31.075 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:31.333 [2024-07-13 23:05:20.588876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.333 BaseBdev1 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:31.333 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.592 23:05:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.849 [ 00:19:31.849 { 00:19:31.849 "name": "BaseBdev1", 00:19:31.849 "aliases": [ 00:19:31.849 "3e8189f6-e1f1-4128-a271-29a3c29b230b" 00:19:31.849 ], 00:19:31.849 "product_name": "Malloc disk", 00:19:31.849 "block_size": 512, 00:19:31.849 "num_blocks": 65536, 00:19:31.849 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:31.849 "assigned_rate_limits": { 00:19:31.849 "rw_ios_per_sec": 0, 00:19:31.849 "rw_mbytes_per_sec": 0, 00:19:31.849 "r_mbytes_per_sec": 0, 00:19:31.849 "w_mbytes_per_sec": 0 00:19:31.849 }, 00:19:31.849 "claimed": true, 00:19:31.849 "claim_type": "exclusive_write", 00:19:31.849 "zoned": false, 00:19:31.849 "supported_io_types": { 00:19:31.849 "read": true, 00:19:31.849 "write": true, 00:19:31.849 "unmap": true, 00:19:31.849 "flush": true, 00:19:31.849 "reset": true, 00:19:31.849 "nvme_admin": false, 00:19:31.849 "nvme_io": false, 00:19:31.849 "nvme_io_md": false, 00:19:31.849 "write_zeroes": true, 00:19:31.849 "zcopy": true, 00:19:31.849 "get_zone_info": false, 00:19:31.849 "zone_management": false, 00:19:31.849 "zone_append": false, 00:19:31.849 "compare": false, 00:19:31.849 "compare_and_write": false, 00:19:31.849 "abort": true, 00:19:31.849 "seek_hole": false, 00:19:31.849 "seek_data": false, 00:19:31.849 "copy": true, 00:19:31.849 "nvme_iov_md": false 00:19:31.849 }, 00:19:31.849 "memory_domains": [ 00:19:31.849 { 00:19:31.849 "dma_device_id": "system", 00:19:31.849 "dma_device_type": 1 00:19:31.849 }, 00:19:31.849 { 00:19:31.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.849 "dma_device_type": 2 00:19:31.849 } 00:19:31.849 ], 00:19:31.849 "driver_specific": {} 00:19:31.849 } 00:19:31.849 ] 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.849 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.106 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.106 "name": "Existed_Raid", 00:19:32.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.106 "strip_size_kb": 0, 00:19:32.106 "state": "configuring", 00:19:32.106 "raid_level": "raid1", 00:19:32.106 "superblock": false, 00:19:32.106 "num_base_bdevs": 3, 00:19:32.106 "num_base_bdevs_discovered": 1, 00:19:32.106 "num_base_bdevs_operational": 3, 00:19:32.106 "base_bdevs_list": [ 00:19:32.106 { 00:19:32.106 "name": "BaseBdev1", 00:19:32.106 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:32.106 "is_configured": true, 00:19:32.106 "data_offset": 0, 00:19:32.106 "data_size": 65536 00:19:32.106 }, 00:19:32.106 { 00:19:32.106 "name": "BaseBdev2", 00:19:32.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.106 "is_configured": false, 00:19:32.106 "data_offset": 0, 00:19:32.106 "data_size": 0 00:19:32.106 }, 00:19:32.106 { 00:19:32.106 "name": "BaseBdev3", 00:19:32.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.106 "is_configured": false, 00:19:32.106 "data_offset": 0, 00:19:32.106 "data_size": 0 00:19:32.106 } 00:19:32.106 ] 00:19:32.106 }' 00:19:32.106 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.106 23:05:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.672 23:05:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:32.930 [2024-07-13 23:05:22.193480] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.930 [2024-07-13 23:05:22.193760] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:19:32.930 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:33.188 [2024-07-13 23:05:22.457574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.188 [2024-07-13 23:05:22.460005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:33.188 [2024-07-13 23:05:22.460225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:33.188 [2024-07-13 23:05:22.460346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:33.188 [2024-07-13 23:05:22.460434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.188 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.447 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.447 "name": "Existed_Raid", 00:19:33.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.447 "strip_size_kb": 0, 00:19:33.447 "state": "configuring", 00:19:33.447 "raid_level": "raid1", 00:19:33.447 "superblock": false, 00:19:33.447 "num_base_bdevs": 3, 00:19:33.447 "num_base_bdevs_discovered": 1, 00:19:33.447 "num_base_bdevs_operational": 3, 00:19:33.447 "base_bdevs_list": [ 00:19:33.447 { 00:19:33.447 "name": "BaseBdev1", 00:19:33.447 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:33.447 "is_configured": true, 00:19:33.447 "data_offset": 0, 00:19:33.447 "data_size": 65536 00:19:33.447 }, 00:19:33.447 { 00:19:33.447 "name": "BaseBdev2", 00:19:33.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.447 "is_configured": false, 00:19:33.447 "data_offset": 0, 00:19:33.447 "data_size": 0 00:19:33.447 }, 00:19:33.447 { 00:19:33.447 "name": "BaseBdev3", 00:19:33.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.447 "is_configured": false, 00:19:33.447 "data_offset": 0, 00:19:33.447 "data_size": 0 00:19:33.447 } 00:19:33.447 ] 00:19:33.447 }' 00:19:33.447 23:05:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.447 23:05:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.014 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:34.272 [2024-07-13 23:05:23.557991] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.272 BaseBdev2 00:19:34.272 23:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:34.272 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:34.272 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:34.272 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:34.272 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:34.272 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:34.272 23:05:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.530 23:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:34.788 [ 00:19:34.788 { 00:19:34.788 "name": "BaseBdev2", 00:19:34.788 "aliases": [ 00:19:34.788 "2fa56115-2f01-4051-907b-ccc6f0e7faed" 00:19:34.788 ], 00:19:34.788 "product_name": "Malloc disk", 00:19:34.788 "block_size": 512, 00:19:34.788 "num_blocks": 65536, 00:19:34.788 "uuid": "2fa56115-2f01-4051-907b-ccc6f0e7faed", 00:19:34.788 "assigned_rate_limits": { 00:19:34.788 "rw_ios_per_sec": 0, 00:19:34.788 "rw_mbytes_per_sec": 0, 00:19:34.788 "r_mbytes_per_sec": 0, 00:19:34.788 "w_mbytes_per_sec": 0 00:19:34.788 }, 00:19:34.788 "claimed": true, 00:19:34.788 "claim_type": "exclusive_write", 00:19:34.788 "zoned": false, 00:19:34.788 "supported_io_types": { 00:19:34.788 "read": true, 00:19:34.788 "write": true, 00:19:34.788 "unmap": true, 00:19:34.788 "flush": true, 00:19:34.788 "reset": true, 00:19:34.788 "nvme_admin": false, 00:19:34.788 "nvme_io": false, 00:19:34.788 "nvme_io_md": false, 00:19:34.788 "write_zeroes": true, 00:19:34.788 "zcopy": true, 00:19:34.788 "get_zone_info": false, 00:19:34.788 "zone_management": false, 00:19:34.788 "zone_append": false, 00:19:34.788 "compare": false, 00:19:34.788 "compare_and_write": false, 00:19:34.788 "abort": true, 00:19:34.788 "seek_hole": false, 00:19:34.788 "seek_data": false, 00:19:34.788 "copy": true, 00:19:34.788 "nvme_iov_md": false 00:19:34.788 }, 00:19:34.788 "memory_domains": [ 00:19:34.788 { 00:19:34.788 "dma_device_id": "system", 00:19:34.788 "dma_device_type": 1 00:19:34.788 }, 00:19:34.788 { 00:19:34.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.788 "dma_device_type": 2 00:19:34.788 } 00:19:34.788 ], 00:19:34.788 "driver_specific": {} 00:19:34.788 } 00:19:34.788 ] 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.788 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.046 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.046 "name": "Existed_Raid", 00:19:35.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.046 "strip_size_kb": 0, 00:19:35.046 "state": "configuring", 00:19:35.046 "raid_level": "raid1", 00:19:35.046 "superblock": false, 00:19:35.046 "num_base_bdevs": 3, 00:19:35.046 "num_base_bdevs_discovered": 2, 00:19:35.046 "num_base_bdevs_operational": 3, 00:19:35.046 "base_bdevs_list": [ 00:19:35.046 { 00:19:35.046 "name": "BaseBdev1", 00:19:35.046 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:35.046 "is_configured": true, 00:19:35.046 "data_offset": 0, 00:19:35.046 "data_size": 65536 00:19:35.046 }, 00:19:35.046 { 00:19:35.046 "name": "BaseBdev2", 00:19:35.046 "uuid": "2fa56115-2f01-4051-907b-ccc6f0e7faed", 00:19:35.046 "is_configured": true, 00:19:35.046 "data_offset": 0, 00:19:35.046 "data_size": 65536 00:19:35.046 }, 00:19:35.046 { 00:19:35.046 "name": "BaseBdev3", 00:19:35.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.046 "is_configured": false, 00:19:35.046 "data_offset": 0, 00:19:35.046 "data_size": 0 00:19:35.046 } 00:19:35.046 ] 00:19:35.046 }' 00:19:35.046 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.046 23:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.612 23:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:35.869 [2024-07-13 23:05:25.254135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:35.869 [2024-07-13 23:05:25.254408] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:19:35.869 [2024-07-13 23:05:25.254455] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:35.869 [2024-07-13 23:05:25.254705] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:19:35.869 [2024-07-13 23:05:25.255282] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:19:35.869 [2024-07-13 23:05:25.255422] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:19:35.869 [2024-07-13 23:05:25.255830] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.869 BaseBdev3 00:19:35.869 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:35.869 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:35.869 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:35.869 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:35.869 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:35.869 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:35.870 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:36.437 [ 00:19:36.437 { 00:19:36.437 "name": "BaseBdev3", 00:19:36.437 "aliases": [ 00:19:36.437 "413f64c5-ede5-40a2-b97d-87edbda1b0d7" 00:19:36.437 ], 00:19:36.437 "product_name": "Malloc disk", 00:19:36.437 "block_size": 512, 00:19:36.437 "num_blocks": 65536, 00:19:36.437 "uuid": "413f64c5-ede5-40a2-b97d-87edbda1b0d7", 00:19:36.437 "assigned_rate_limits": { 00:19:36.437 "rw_ios_per_sec": 0, 00:19:36.437 "rw_mbytes_per_sec": 0, 00:19:36.437 "r_mbytes_per_sec": 0, 00:19:36.437 "w_mbytes_per_sec": 0 00:19:36.437 }, 00:19:36.437 "claimed": true, 00:19:36.437 "claim_type": "exclusive_write", 00:19:36.437 "zoned": false, 00:19:36.437 "supported_io_types": { 00:19:36.437 "read": true, 00:19:36.437 "write": true, 00:19:36.437 "unmap": true, 00:19:36.437 "flush": true, 00:19:36.437 "reset": true, 00:19:36.437 "nvme_admin": false, 00:19:36.437 "nvme_io": false, 00:19:36.437 "nvme_io_md": false, 00:19:36.437 "write_zeroes": true, 00:19:36.437 "zcopy": true, 00:19:36.437 "get_zone_info": false, 00:19:36.437 "zone_management": false, 00:19:36.437 "zone_append": false, 00:19:36.437 "compare": false, 00:19:36.437 "compare_and_write": false, 00:19:36.437 "abort": true, 00:19:36.437 "seek_hole": false, 00:19:36.437 "seek_data": false, 00:19:36.437 "copy": true, 00:19:36.437 "nvme_iov_md": false 00:19:36.437 }, 00:19:36.437 "memory_domains": [ 00:19:36.437 { 00:19:36.437 "dma_device_id": "system", 00:19:36.437 "dma_device_type": 1 00:19:36.437 }, 00:19:36.437 { 00:19:36.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.437 "dma_device_type": 2 00:19:36.437 } 00:19:36.437 ], 00:19:36.437 "driver_specific": {} 00:19:36.437 } 00:19:36.437 ] 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.437 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.696 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.696 "name": "Existed_Raid", 00:19:36.696 "uuid": "32a10376-8107-4c7d-bd6f-fa4707d1f98e", 00:19:36.696 "strip_size_kb": 0, 00:19:36.696 "state": "online", 00:19:36.696 "raid_level": "raid1", 00:19:36.696 "superblock": false, 00:19:36.696 "num_base_bdevs": 3, 00:19:36.696 "num_base_bdevs_discovered": 3, 00:19:36.696 "num_base_bdevs_operational": 3, 00:19:36.696 "base_bdevs_list": [ 00:19:36.696 { 00:19:36.696 "name": "BaseBdev1", 00:19:36.696 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:36.696 "is_configured": true, 00:19:36.696 "data_offset": 0, 00:19:36.696 "data_size": 65536 00:19:36.696 }, 00:19:36.696 { 00:19:36.696 "name": "BaseBdev2", 00:19:36.696 "uuid": "2fa56115-2f01-4051-907b-ccc6f0e7faed", 00:19:36.696 "is_configured": true, 00:19:36.696 "data_offset": 0, 00:19:36.696 "data_size": 65536 00:19:36.696 }, 00:19:36.696 { 00:19:36.696 "name": "BaseBdev3", 00:19:36.696 "uuid": "413f64c5-ede5-40a2-b97d-87edbda1b0d7", 00:19:36.696 "is_configured": true, 00:19:36.696 "data_offset": 0, 00:19:36.696 "data_size": 65536 00:19:36.696 } 00:19:36.696 ] 00:19:36.696 }' 00:19:36.696 23:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.696 23:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:37.263 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:37.522 [2024-07-13 23:05:26.897397] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.522 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:37.522 "name": "Existed_Raid", 00:19:37.522 "aliases": [ 00:19:37.522 "32a10376-8107-4c7d-bd6f-fa4707d1f98e" 00:19:37.522 ], 00:19:37.522 "product_name": "Raid Volume", 00:19:37.522 "block_size": 512, 00:19:37.522 "num_blocks": 65536, 00:19:37.522 "uuid": "32a10376-8107-4c7d-bd6f-fa4707d1f98e", 00:19:37.522 "assigned_rate_limits": { 00:19:37.522 "rw_ios_per_sec": 0, 00:19:37.522 "rw_mbytes_per_sec": 0, 00:19:37.522 "r_mbytes_per_sec": 0, 00:19:37.522 "w_mbytes_per_sec": 0 00:19:37.522 }, 00:19:37.522 "claimed": false, 00:19:37.522 "zoned": false, 00:19:37.522 "supported_io_types": { 00:19:37.522 "read": true, 00:19:37.522 "write": true, 00:19:37.522 "unmap": false, 00:19:37.522 "flush": false, 00:19:37.522 "reset": true, 00:19:37.522 "nvme_admin": false, 00:19:37.522 
"nvme_io": false, 00:19:37.522 "nvme_io_md": false, 00:19:37.522 "write_zeroes": true, 00:19:37.522 "zcopy": false, 00:19:37.522 "get_zone_info": false, 00:19:37.522 "zone_management": false, 00:19:37.522 "zone_append": false, 00:19:37.522 "compare": false, 00:19:37.522 "compare_and_write": false, 00:19:37.522 "abort": false, 00:19:37.522 "seek_hole": false, 00:19:37.522 "seek_data": false, 00:19:37.522 "copy": false, 00:19:37.522 "nvme_iov_md": false 00:19:37.522 }, 00:19:37.522 "memory_domains": [ 00:19:37.522 { 00:19:37.522 "dma_device_id": "system", 00:19:37.522 "dma_device_type": 1 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.522 "dma_device_type": 2 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "dma_device_id": "system", 00:19:37.522 "dma_device_type": 1 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.522 "dma_device_type": 2 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "dma_device_id": "system", 00:19:37.522 "dma_device_type": 1 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.522 "dma_device_type": 2 00:19:37.522 } 00:19:37.522 ], 00:19:37.522 "driver_specific": { 00:19:37.522 "raid": { 00:19:37.522 "uuid": "32a10376-8107-4c7d-bd6f-fa4707d1f98e", 00:19:37.522 "strip_size_kb": 0, 00:19:37.522 "state": "online", 00:19:37.522 "raid_level": "raid1", 00:19:37.522 "superblock": false, 00:19:37.522 "num_base_bdevs": 3, 00:19:37.522 "num_base_bdevs_discovered": 3, 00:19:37.522 "num_base_bdevs_operational": 3, 00:19:37.522 "base_bdevs_list": [ 00:19:37.522 { 00:19:37.522 "name": "BaseBdev1", 00:19:37.522 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:37.522 "is_configured": true, 00:19:37.522 "data_offset": 0, 00:19:37.522 "data_size": 65536 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "name": "BaseBdev2", 00:19:37.522 "uuid": "2fa56115-2f01-4051-907b-ccc6f0e7faed", 00:19:37.522 "is_configured": true, 00:19:37.522 "data_offset": 0, 00:19:37.522 "data_size": 65536 00:19:37.522 }, 00:19:37.522 { 00:19:37.522 "name": "BaseBdev3", 00:19:37.522 "uuid": "413f64c5-ede5-40a2-b97d-87edbda1b0d7", 00:19:37.522 "is_configured": true, 00:19:37.522 "data_offset": 0, 00:19:37.522 "data_size": 65536 00:19:37.522 } 00:19:37.522 ] 00:19:37.522 } 00:19:37.522 } 00:19:37.522 }' 00:19:37.522 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.781 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:37.781 BaseBdev2 00:19:37.781 BaseBdev3' 00:19:37.781 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.781 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:37.781 23:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:38.039 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:38.040 "name": "BaseBdev1", 00:19:38.040 "aliases": [ 00:19:38.040 "3e8189f6-e1f1-4128-a271-29a3c29b230b" 00:19:38.040 ], 00:19:38.040 "product_name": "Malloc disk", 00:19:38.040 "block_size": 512, 00:19:38.040 "num_blocks": 65536, 00:19:38.040 "uuid": "3e8189f6-e1f1-4128-a271-29a3c29b230b", 00:19:38.040 "assigned_rate_limits": { 00:19:38.040 "rw_ios_per_sec": 0, 
00:19:38.040 "rw_mbytes_per_sec": 0, 00:19:38.040 "r_mbytes_per_sec": 0, 00:19:38.040 "w_mbytes_per_sec": 0 00:19:38.040 }, 00:19:38.040 "claimed": true, 00:19:38.040 "claim_type": "exclusive_write", 00:19:38.040 "zoned": false, 00:19:38.040 "supported_io_types": { 00:19:38.040 "read": true, 00:19:38.040 "write": true, 00:19:38.040 "unmap": true, 00:19:38.040 "flush": true, 00:19:38.040 "reset": true, 00:19:38.040 "nvme_admin": false, 00:19:38.040 "nvme_io": false, 00:19:38.040 "nvme_io_md": false, 00:19:38.040 "write_zeroes": true, 00:19:38.040 "zcopy": true, 00:19:38.040 "get_zone_info": false, 00:19:38.040 "zone_management": false, 00:19:38.040 "zone_append": false, 00:19:38.040 "compare": false, 00:19:38.040 "compare_and_write": false, 00:19:38.040 "abort": true, 00:19:38.040 "seek_hole": false, 00:19:38.040 "seek_data": false, 00:19:38.040 "copy": true, 00:19:38.040 "nvme_iov_md": false 00:19:38.040 }, 00:19:38.040 "memory_domains": [ 00:19:38.040 { 00:19:38.040 "dma_device_id": "system", 00:19:38.040 "dma_device_type": 1 00:19:38.040 }, 00:19:38.040 { 00:19:38.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.040 "dma_device_type": 2 00:19:38.040 } 00:19:38.040 ], 00:19:38.040 "driver_specific": {} 00:19:38.040 }' 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.040 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:38.298 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:38.557 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:38.557 "name": "BaseBdev2", 00:19:38.557 "aliases": [ 00:19:38.557 "2fa56115-2f01-4051-907b-ccc6f0e7faed" 00:19:38.557 ], 00:19:38.557 "product_name": "Malloc disk", 00:19:38.557 "block_size": 512, 00:19:38.557 "num_blocks": 65536, 00:19:38.557 "uuid": "2fa56115-2f01-4051-907b-ccc6f0e7faed", 00:19:38.557 "assigned_rate_limits": { 00:19:38.557 "rw_ios_per_sec": 0, 00:19:38.557 "rw_mbytes_per_sec": 0, 00:19:38.557 "r_mbytes_per_sec": 0, 00:19:38.557 "w_mbytes_per_sec": 0 00:19:38.557 }, 00:19:38.557 "claimed": true, 00:19:38.557 "claim_type": "exclusive_write", 
00:19:38.557 "zoned": false, 00:19:38.557 "supported_io_types": { 00:19:38.557 "read": true, 00:19:38.557 "write": true, 00:19:38.557 "unmap": true, 00:19:38.557 "flush": true, 00:19:38.557 "reset": true, 00:19:38.557 "nvme_admin": false, 00:19:38.557 "nvme_io": false, 00:19:38.557 "nvme_io_md": false, 00:19:38.557 "write_zeroes": true, 00:19:38.557 "zcopy": true, 00:19:38.557 "get_zone_info": false, 00:19:38.557 "zone_management": false, 00:19:38.557 "zone_append": false, 00:19:38.557 "compare": false, 00:19:38.557 "compare_and_write": false, 00:19:38.557 "abort": true, 00:19:38.557 "seek_hole": false, 00:19:38.557 "seek_data": false, 00:19:38.557 "copy": true, 00:19:38.557 "nvme_iov_md": false 00:19:38.557 }, 00:19:38.557 "memory_domains": [ 00:19:38.557 { 00:19:38.557 "dma_device_id": "system", 00:19:38.557 "dma_device_type": 1 00:19:38.557 }, 00:19:38.557 { 00:19:38.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.557 "dma_device_type": 2 00:19:38.557 } 00:19:38.557 ], 00:19:38.557 "driver_specific": {} 00:19:38.557 }' 00:19:38.557 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.557 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.557 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.557 23:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.816 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.075 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.075 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.075 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:39.075 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.335 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.335 "name": "BaseBdev3", 00:19:39.335 "aliases": [ 00:19:39.335 "413f64c5-ede5-40a2-b97d-87edbda1b0d7" 00:19:39.335 ], 00:19:39.335 "product_name": "Malloc disk", 00:19:39.335 "block_size": 512, 00:19:39.335 "num_blocks": 65536, 00:19:39.335 "uuid": "413f64c5-ede5-40a2-b97d-87edbda1b0d7", 00:19:39.335 "assigned_rate_limits": { 00:19:39.335 "rw_ios_per_sec": 0, 00:19:39.335 "rw_mbytes_per_sec": 0, 00:19:39.335 "r_mbytes_per_sec": 0, 00:19:39.335 "w_mbytes_per_sec": 0 00:19:39.335 }, 00:19:39.335 "claimed": true, 00:19:39.335 "claim_type": "exclusive_write", 00:19:39.335 "zoned": false, 00:19:39.335 "supported_io_types": { 00:19:39.335 "read": true, 00:19:39.335 "write": true, 00:19:39.335 "unmap": true, 00:19:39.335 "flush": true, 00:19:39.335 "reset": 
true, 00:19:39.335 "nvme_admin": false, 00:19:39.335 "nvme_io": false, 00:19:39.335 "nvme_io_md": false, 00:19:39.335 "write_zeroes": true, 00:19:39.335 "zcopy": true, 00:19:39.335 "get_zone_info": false, 00:19:39.335 "zone_management": false, 00:19:39.335 "zone_append": false, 00:19:39.335 "compare": false, 00:19:39.335 "compare_and_write": false, 00:19:39.335 "abort": true, 00:19:39.335 "seek_hole": false, 00:19:39.335 "seek_data": false, 00:19:39.335 "copy": true, 00:19:39.335 "nvme_iov_md": false 00:19:39.335 }, 00:19:39.335 "memory_domains": [ 00:19:39.335 { 00:19:39.335 "dma_device_id": "system", 00:19:39.335 "dma_device_type": 1 00:19:39.335 }, 00:19:39.335 { 00:19:39.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.335 "dma_device_type": 2 00:19:39.335 } 00:19:39.335 ], 00:19:39.335 "driver_specific": {} 00:19:39.335 }' 00:19:39.335 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.335 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.335 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:39.335 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.335 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.594 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.595 23:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:39.853 [2024-07-13 23:05:29.212088] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.853 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.854 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.854 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.421 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.421 "name": "Existed_Raid", 00:19:40.421 "uuid": "32a10376-8107-4c7d-bd6f-fa4707d1f98e", 00:19:40.421 "strip_size_kb": 0, 00:19:40.421 "state": "online", 00:19:40.421 "raid_level": "raid1", 00:19:40.421 "superblock": false, 00:19:40.421 "num_base_bdevs": 3, 00:19:40.421 "num_base_bdevs_discovered": 2, 00:19:40.421 "num_base_bdevs_operational": 2, 00:19:40.421 "base_bdevs_list": [ 00:19:40.421 { 00:19:40.421 "name": null, 00:19:40.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.421 "is_configured": false, 00:19:40.421 "data_offset": 0, 00:19:40.421 "data_size": 65536 00:19:40.421 }, 00:19:40.421 { 00:19:40.421 "name": "BaseBdev2", 00:19:40.421 "uuid": "2fa56115-2f01-4051-907b-ccc6f0e7faed", 00:19:40.421 "is_configured": true, 00:19:40.421 "data_offset": 0, 00:19:40.421 "data_size": 65536 00:19:40.421 }, 00:19:40.421 { 00:19:40.421 "name": "BaseBdev3", 00:19:40.421 "uuid": "413f64c5-ede5-40a2-b97d-87edbda1b0d7", 00:19:40.421 "is_configured": true, 00:19:40.421 "data_offset": 0, 00:19:40.421 "data_size": 65536 00:19:40.421 } 00:19:40.421 ] 00:19:40.421 }' 00:19:40.421 23:05:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.421 23:05:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.988 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:40.988 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:40.988 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.988 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:41.247 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:41.247 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.247 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:41.506 [2024-07-13 23:05:30.758115] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:41.506 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:41.506 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:41.506 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.506 23:05:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:41.764 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:41.764 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:41.764 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:42.023 [2024-07-13 23:05:31.258637] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.023 [2024-07-13 23:05:31.258940] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.023 [2024-07-13 23:05:31.271978] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.023 [2024-07-13 23:05:31.272221] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.023 [2024-07-13 23:05:31.272325] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:19:42.023 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:42.023 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:42.023 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.023 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:42.282 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:42.282 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:42.282 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:42.282 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:42.282 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:42.282 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:42.541 BaseBdev2 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:42.541 23:05:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.803 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:43.062 [ 00:19:43.062 { 00:19:43.062 "name": "BaseBdev2", 00:19:43.062 "aliases": [ 00:19:43.062 "f8d49f66-5408-4e61-b2c3-687142cdb2d4" 00:19:43.062 ], 00:19:43.062 "product_name": "Malloc disk", 00:19:43.062 "block_size": 512, 00:19:43.062 "num_blocks": 65536, 00:19:43.062 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:43.062 "assigned_rate_limits": { 00:19:43.062 "rw_ios_per_sec": 0, 00:19:43.062 "rw_mbytes_per_sec": 0, 00:19:43.062 "r_mbytes_per_sec": 0, 00:19:43.062 "w_mbytes_per_sec": 0 00:19:43.062 }, 00:19:43.062 "claimed": false, 00:19:43.062 "zoned": false, 00:19:43.062 "supported_io_types": { 00:19:43.062 "read": true, 00:19:43.062 "write": true, 00:19:43.062 "unmap": true, 00:19:43.062 "flush": true, 00:19:43.062 "reset": true, 00:19:43.062 "nvme_admin": false, 00:19:43.062 "nvme_io": false, 00:19:43.062 "nvme_io_md": false, 00:19:43.062 "write_zeroes": true, 00:19:43.062 "zcopy": true, 00:19:43.062 "get_zone_info": false, 00:19:43.062 "zone_management": false, 00:19:43.062 "zone_append": false, 00:19:43.062 "compare": false, 00:19:43.062 "compare_and_write": false, 00:19:43.062 "abort": true, 00:19:43.062 "seek_hole": false, 00:19:43.062 "seek_data": false, 00:19:43.062 "copy": true, 00:19:43.062 "nvme_iov_md": false 00:19:43.062 }, 00:19:43.062 "memory_domains": [ 00:19:43.062 { 00:19:43.062 "dma_device_id": "system", 00:19:43.062 "dma_device_type": 1 00:19:43.062 }, 00:19:43.062 { 00:19:43.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.062 "dma_device_type": 2 00:19:43.062 } 00:19:43.062 ], 00:19:43.062 "driver_specific": {} 00:19:43.062 } 00:19:43.062 ] 00:19:43.062 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:43.062 23:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:43.062 23:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:43.062 23:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:43.321 BaseBdev3 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:43.321 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.579 23:05:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:43.838 [ 00:19:43.838 { 00:19:43.838 "name": "BaseBdev3", 00:19:43.838 "aliases": [ 00:19:43.838 "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1" 00:19:43.838 ], 00:19:43.838 "product_name": "Malloc disk", 00:19:43.838 "block_size": 512, 00:19:43.838 "num_blocks": 65536, 00:19:43.838 
"uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:43.838 "assigned_rate_limits": { 00:19:43.838 "rw_ios_per_sec": 0, 00:19:43.838 "rw_mbytes_per_sec": 0, 00:19:43.838 "r_mbytes_per_sec": 0, 00:19:43.838 "w_mbytes_per_sec": 0 00:19:43.838 }, 00:19:43.838 "claimed": false, 00:19:43.838 "zoned": false, 00:19:43.838 "supported_io_types": { 00:19:43.838 "read": true, 00:19:43.838 "write": true, 00:19:43.838 "unmap": true, 00:19:43.838 "flush": true, 00:19:43.838 "reset": true, 00:19:43.838 "nvme_admin": false, 00:19:43.838 "nvme_io": false, 00:19:43.838 "nvme_io_md": false, 00:19:43.838 "write_zeroes": true, 00:19:43.838 "zcopy": true, 00:19:43.838 "get_zone_info": false, 00:19:43.838 "zone_management": false, 00:19:43.838 "zone_append": false, 00:19:43.838 "compare": false, 00:19:43.838 "compare_and_write": false, 00:19:43.838 "abort": true, 00:19:43.838 "seek_hole": false, 00:19:43.838 "seek_data": false, 00:19:43.838 "copy": true, 00:19:43.838 "nvme_iov_md": false 00:19:43.838 }, 00:19:43.838 "memory_domains": [ 00:19:43.838 { 00:19:43.838 "dma_device_id": "system", 00:19:43.838 "dma_device_type": 1 00:19:43.838 }, 00:19:43.838 { 00:19:43.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.838 "dma_device_type": 2 00:19:43.838 } 00:19:43.838 ], 00:19:43.838 "driver_specific": {} 00:19:43.838 } 00:19:43.838 ] 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:43.838 [2024-07-13 23:05:33.219662] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:43.838 [2024-07-13 23:05:33.219951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:43.838 [2024-07-13 23:05:33.220099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.838 [2024-07-13 23:05:33.222295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.838 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.839 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.404 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.404 "name": "Existed_Raid", 00:19:44.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.404 "strip_size_kb": 0, 00:19:44.404 "state": "configuring", 00:19:44.404 "raid_level": "raid1", 00:19:44.404 "superblock": false, 00:19:44.404 "num_base_bdevs": 3, 00:19:44.404 "num_base_bdevs_discovered": 2, 00:19:44.404 "num_base_bdevs_operational": 3, 00:19:44.404 "base_bdevs_list": [ 00:19:44.404 { 00:19:44.404 "name": "BaseBdev1", 00:19:44.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.404 "is_configured": false, 00:19:44.404 "data_offset": 0, 00:19:44.404 "data_size": 0 00:19:44.404 }, 00:19:44.404 { 00:19:44.404 "name": "BaseBdev2", 00:19:44.404 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:44.404 "is_configured": true, 00:19:44.404 "data_offset": 0, 00:19:44.404 "data_size": 65536 00:19:44.404 }, 00:19:44.404 { 00:19:44.404 "name": "BaseBdev3", 00:19:44.404 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:44.404 "is_configured": true, 00:19:44.404 "data_offset": 0, 00:19:44.404 "data_size": 65536 00:19:44.404 } 00:19:44.404 ] 00:19:44.404 }' 00:19:44.404 23:05:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.404 23:05:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.969 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:44.969 [2024-07-13 23:05:34.364086] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.228 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.500 23:05:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.500 "name": "Existed_Raid", 00:19:45.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.500 "strip_size_kb": 0, 00:19:45.500 "state": "configuring", 00:19:45.500 "raid_level": "raid1", 00:19:45.500 "superblock": false, 00:19:45.500 "num_base_bdevs": 3, 00:19:45.500 "num_base_bdevs_discovered": 1, 00:19:45.500 "num_base_bdevs_operational": 3, 00:19:45.500 "base_bdevs_list": [ 00:19:45.500 { 00:19:45.500 "name": "BaseBdev1", 00:19:45.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.500 "is_configured": false, 00:19:45.500 "data_offset": 0, 00:19:45.500 "data_size": 0 00:19:45.500 }, 00:19:45.500 { 00:19:45.500 "name": null, 00:19:45.500 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:45.500 "is_configured": false, 00:19:45.500 "data_offset": 0, 00:19:45.500 "data_size": 65536 00:19:45.500 }, 00:19:45.500 { 00:19:45.500 "name": "BaseBdev3", 00:19:45.500 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:45.500 "is_configured": true, 00:19:45.500 "data_offset": 0, 00:19:45.500 "data_size": 65536 00:19:45.500 } 00:19:45.500 ] 00:19:45.500 }' 00:19:45.500 23:05:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.500 23:05:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.079 23:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.079 23:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:46.338 23:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:46.338 23:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:46.596 [2024-07-13 23:05:35.832823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.596 BaseBdev1 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:46.596 23:05:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.855 23:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.113 [ 00:19:47.113 { 00:19:47.113 "name": "BaseBdev1", 00:19:47.113 "aliases": [ 00:19:47.113 "1104827f-1a9a-4e3b-ba26-ebf9cde7a952" 00:19:47.113 ], 00:19:47.113 "product_name": "Malloc disk", 00:19:47.113 "block_size": 512, 00:19:47.113 "num_blocks": 65536, 00:19:47.113 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:47.113 "assigned_rate_limits": { 00:19:47.113 
"rw_ios_per_sec": 0, 00:19:47.113 "rw_mbytes_per_sec": 0, 00:19:47.113 "r_mbytes_per_sec": 0, 00:19:47.113 "w_mbytes_per_sec": 0 00:19:47.113 }, 00:19:47.113 "claimed": true, 00:19:47.113 "claim_type": "exclusive_write", 00:19:47.113 "zoned": false, 00:19:47.113 "supported_io_types": { 00:19:47.113 "read": true, 00:19:47.113 "write": true, 00:19:47.113 "unmap": true, 00:19:47.113 "flush": true, 00:19:47.113 "reset": true, 00:19:47.113 "nvme_admin": false, 00:19:47.113 "nvme_io": false, 00:19:47.113 "nvme_io_md": false, 00:19:47.113 "write_zeroes": true, 00:19:47.113 "zcopy": true, 00:19:47.113 "get_zone_info": false, 00:19:47.113 "zone_management": false, 00:19:47.113 "zone_append": false, 00:19:47.113 "compare": false, 00:19:47.113 "compare_and_write": false, 00:19:47.113 "abort": true, 00:19:47.113 "seek_hole": false, 00:19:47.113 "seek_data": false, 00:19:47.113 "copy": true, 00:19:47.113 "nvme_iov_md": false 00:19:47.113 }, 00:19:47.113 "memory_domains": [ 00:19:47.113 { 00:19:47.113 "dma_device_id": "system", 00:19:47.113 "dma_device_type": 1 00:19:47.113 }, 00:19:47.113 { 00:19:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.113 "dma_device_type": 2 00:19:47.113 } 00:19:47.113 ], 00:19:47.113 "driver_specific": {} 00:19:47.113 } 00:19:47.113 ] 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.113 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.372 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.372 "name": "Existed_Raid", 00:19:47.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.372 "strip_size_kb": 0, 00:19:47.372 "state": "configuring", 00:19:47.372 "raid_level": "raid1", 00:19:47.372 "superblock": false, 00:19:47.372 "num_base_bdevs": 3, 00:19:47.372 "num_base_bdevs_discovered": 2, 00:19:47.372 "num_base_bdevs_operational": 3, 00:19:47.372 "base_bdevs_list": [ 00:19:47.372 { 00:19:47.372 "name": "BaseBdev1", 00:19:47.372 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:47.372 "is_configured": true, 00:19:47.372 "data_offset": 0, 00:19:47.372 
"data_size": 65536 00:19:47.372 }, 00:19:47.372 { 00:19:47.372 "name": null, 00:19:47.372 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:47.372 "is_configured": false, 00:19:47.372 "data_offset": 0, 00:19:47.372 "data_size": 65536 00:19:47.372 }, 00:19:47.372 { 00:19:47.372 "name": "BaseBdev3", 00:19:47.372 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:47.372 "is_configured": true, 00:19:47.372 "data_offset": 0, 00:19:47.372 "data_size": 65536 00:19:47.372 } 00:19:47.372 ] 00:19:47.372 }' 00:19:47.372 23:05:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.372 23:05:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.938 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.938 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:48.197 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:48.197 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:48.456 [2024-07-13 23:05:37.773455] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.456 23:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.714 23:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.714 "name": "Existed_Raid", 00:19:48.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.714 "strip_size_kb": 0, 00:19:48.714 "state": "configuring", 00:19:48.714 "raid_level": "raid1", 00:19:48.714 "superblock": false, 00:19:48.714 "num_base_bdevs": 3, 00:19:48.714 "num_base_bdevs_discovered": 1, 00:19:48.714 "num_base_bdevs_operational": 3, 00:19:48.714 "base_bdevs_list": [ 00:19:48.714 { 00:19:48.714 "name": "BaseBdev1", 00:19:48.714 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:48.714 "is_configured": true, 
00:19:48.714 "data_offset": 0, 00:19:48.714 "data_size": 65536 00:19:48.714 }, 00:19:48.714 { 00:19:48.714 "name": null, 00:19:48.714 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:48.714 "is_configured": false, 00:19:48.714 "data_offset": 0, 00:19:48.714 "data_size": 65536 00:19:48.714 }, 00:19:48.714 { 00:19:48.714 "name": null, 00:19:48.714 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:48.714 "is_configured": false, 00:19:48.714 "data_offset": 0, 00:19:48.714 "data_size": 65536 00:19:48.714 } 00:19:48.714 ] 00:19:48.714 }' 00:19:48.714 23:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.714 23:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.649 23:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.649 23:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.649 23:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:49.649 23:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:49.908 [2024-07-13 23:05:39.253908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.908 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.167 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.167 "name": "Existed_Raid", 00:19:50.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.167 "strip_size_kb": 0, 00:19:50.167 "state": "configuring", 00:19:50.167 "raid_level": "raid1", 00:19:50.167 "superblock": false, 00:19:50.167 "num_base_bdevs": 3, 00:19:50.167 "num_base_bdevs_discovered": 2, 00:19:50.167 "num_base_bdevs_operational": 3, 00:19:50.167 "base_bdevs_list": [ 00:19:50.167 { 00:19:50.167 "name": "BaseBdev1", 00:19:50.167 "uuid": 
"1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:50.167 "is_configured": true, 00:19:50.167 "data_offset": 0, 00:19:50.167 "data_size": 65536 00:19:50.167 }, 00:19:50.167 { 00:19:50.167 "name": null, 00:19:50.167 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:50.167 "is_configured": false, 00:19:50.167 "data_offset": 0, 00:19:50.167 "data_size": 65536 00:19:50.167 }, 00:19:50.167 { 00:19:50.167 "name": "BaseBdev3", 00:19:50.167 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:50.167 "is_configured": true, 00:19:50.167 "data_offset": 0, 00:19:50.167 "data_size": 65536 00:19:50.167 } 00:19:50.167 ] 00:19:50.167 }' 00:19:50.167 23:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.167 23:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.734 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.734 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:50.993 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:50.993 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:51.251 [2024-07-13 23:05:40.534169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.251 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.511 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.511 "name": "Existed_Raid", 00:19:51.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.511 "strip_size_kb": 0, 00:19:51.511 "state": "configuring", 00:19:51.511 "raid_level": "raid1", 00:19:51.511 "superblock": false, 00:19:51.511 "num_base_bdevs": 3, 00:19:51.511 "num_base_bdevs_discovered": 1, 00:19:51.511 "num_base_bdevs_operational": 3, 00:19:51.511 "base_bdevs_list": [ 00:19:51.511 { 00:19:51.511 
"name": null, 00:19:51.511 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:51.511 "is_configured": false, 00:19:51.511 "data_offset": 0, 00:19:51.511 "data_size": 65536 00:19:51.511 }, 00:19:51.511 { 00:19:51.511 "name": null, 00:19:51.511 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:51.511 "is_configured": false, 00:19:51.511 "data_offset": 0, 00:19:51.511 "data_size": 65536 00:19:51.511 }, 00:19:51.511 { 00:19:51.511 "name": "BaseBdev3", 00:19:51.511 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:51.511 "is_configured": true, 00:19:51.511 "data_offset": 0, 00:19:51.511 "data_size": 65536 00:19:51.511 } 00:19:51.511 ] 00:19:51.511 }' 00:19:51.511 23:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.511 23:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.079 23:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.079 23:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:52.645 23:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:52.646 23:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:52.646 [2024-07-13 23:05:42.010739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.646 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.904 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.904 "name": "Existed_Raid", 00:19:52.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.904 "strip_size_kb": 0, 00:19:52.904 "state": "configuring", 00:19:52.904 "raid_level": "raid1", 00:19:52.904 "superblock": false, 00:19:52.904 "num_base_bdevs": 3, 00:19:52.904 "num_base_bdevs_discovered": 2, 00:19:52.904 
"num_base_bdevs_operational": 3, 00:19:52.904 "base_bdevs_list": [ 00:19:52.904 { 00:19:52.904 "name": null, 00:19:52.904 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:52.904 "is_configured": false, 00:19:52.904 "data_offset": 0, 00:19:52.904 "data_size": 65536 00:19:52.904 }, 00:19:52.904 { 00:19:52.904 "name": "BaseBdev2", 00:19:52.904 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:52.904 "is_configured": true, 00:19:52.904 "data_offset": 0, 00:19:52.904 "data_size": 65536 00:19:52.904 }, 00:19:52.904 { 00:19:52.904 "name": "BaseBdev3", 00:19:52.904 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:52.904 "is_configured": true, 00:19:52.904 "data_offset": 0, 00:19:52.904 "data_size": 65536 00:19:52.904 } 00:19:52.904 ] 00:19:52.904 }' 00:19:52.904 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.904 23:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.841 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:53.841 23:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.841 23:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:53.841 23:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.841 23:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:54.131 23:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1104827f-1a9a-4e3b-ba26-ebf9cde7a952 00:19:54.388 [2024-07-13 23:05:43.670318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:54.388 [2024-07-13 23:05:43.670609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:54.388 [2024-07-13 23:05:43.670655] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:54.388 [2024-07-13 23:05:43.670835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:54.388 [2024-07-13 23:05:43.671317] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:54.388 [2024-07-13 23:05:43.671464] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:54.388 [2024-07-13 23:05:43.671775] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.388 NewBaseBdev 00:19:54.388 23:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:54.388 23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:54.388 23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:54.388 23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:54.388 23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:54.388 23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:54.388 
23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.646 23:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:54.904 [ 00:19:54.904 { 00:19:54.904 "name": "NewBaseBdev", 00:19:54.904 "aliases": [ 00:19:54.904 "1104827f-1a9a-4e3b-ba26-ebf9cde7a952" 00:19:54.904 ], 00:19:54.904 "product_name": "Malloc disk", 00:19:54.904 "block_size": 512, 00:19:54.904 "num_blocks": 65536, 00:19:54.904 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:54.904 "assigned_rate_limits": { 00:19:54.904 "rw_ios_per_sec": 0, 00:19:54.904 "rw_mbytes_per_sec": 0, 00:19:54.904 "r_mbytes_per_sec": 0, 00:19:54.904 "w_mbytes_per_sec": 0 00:19:54.904 }, 00:19:54.904 "claimed": true, 00:19:54.904 "claim_type": "exclusive_write", 00:19:54.904 "zoned": false, 00:19:54.904 "supported_io_types": { 00:19:54.904 "read": true, 00:19:54.904 "write": true, 00:19:54.904 "unmap": true, 00:19:54.904 "flush": true, 00:19:54.904 "reset": true, 00:19:54.904 "nvme_admin": false, 00:19:54.904 "nvme_io": false, 00:19:54.904 "nvme_io_md": false, 00:19:54.904 "write_zeroes": true, 00:19:54.904 "zcopy": true, 00:19:54.904 "get_zone_info": false, 00:19:54.904 "zone_management": false, 00:19:54.904 "zone_append": false, 00:19:54.904 "compare": false, 00:19:54.905 "compare_and_write": false, 00:19:54.905 "abort": true, 00:19:54.905 "seek_hole": false, 00:19:54.905 "seek_data": false, 00:19:54.905 "copy": true, 00:19:54.905 "nvme_iov_md": false 00:19:54.905 }, 00:19:54.905 "memory_domains": [ 00:19:54.905 { 00:19:54.905 "dma_device_id": "system", 00:19:54.905 "dma_device_type": 1 00:19:54.905 }, 00:19:54.905 { 00:19:54.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.905 "dma_device_type": 2 00:19:54.905 } 00:19:54.905 ], 00:19:54.905 "driver_specific": {} 00:19:54.905 } 00:19:54.905 ] 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.905 23:05:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.163 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.163 "name": "Existed_Raid", 00:19:55.163 "uuid": "af03083d-d532-41d0-ba14-42ce99f3eeff", 00:19:55.163 "strip_size_kb": 0, 00:19:55.163 "state": "online", 00:19:55.163 "raid_level": "raid1", 00:19:55.163 "superblock": false, 00:19:55.163 "num_base_bdevs": 3, 00:19:55.163 "num_base_bdevs_discovered": 3, 00:19:55.163 "num_base_bdevs_operational": 3, 00:19:55.163 "base_bdevs_list": [ 00:19:55.163 { 00:19:55.163 "name": "NewBaseBdev", 00:19:55.163 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:55.163 "is_configured": true, 00:19:55.163 "data_offset": 0, 00:19:55.163 "data_size": 65536 00:19:55.163 }, 00:19:55.163 { 00:19:55.163 "name": "BaseBdev2", 00:19:55.163 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:55.163 "is_configured": true, 00:19:55.163 "data_offset": 0, 00:19:55.163 "data_size": 65536 00:19:55.163 }, 00:19:55.163 { 00:19:55.163 "name": "BaseBdev3", 00:19:55.163 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:55.163 "is_configured": true, 00:19:55.163 "data_offset": 0, 00:19:55.163 "data_size": 65536 00:19:55.163 } 00:19:55.163 ] 00:19:55.163 }' 00:19:55.163 23:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.163 23:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.729 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:55.730 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:55.989 [2024-07-13 23:05:45.283089] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.989 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:55.989 "name": "Existed_Raid", 00:19:55.989 "aliases": [ 00:19:55.989 "af03083d-d532-41d0-ba14-42ce99f3eeff" 00:19:55.989 ], 00:19:55.989 "product_name": "Raid Volume", 00:19:55.989 "block_size": 512, 00:19:55.989 "num_blocks": 65536, 00:19:55.989 "uuid": "af03083d-d532-41d0-ba14-42ce99f3eeff", 00:19:55.989 "assigned_rate_limits": { 00:19:55.989 "rw_ios_per_sec": 0, 00:19:55.989 "rw_mbytes_per_sec": 0, 00:19:55.989 "r_mbytes_per_sec": 0, 00:19:55.989 "w_mbytes_per_sec": 0 00:19:55.989 }, 00:19:55.989 "claimed": false, 00:19:55.989 "zoned": false, 00:19:55.989 "supported_io_types": { 00:19:55.989 "read": true, 00:19:55.989 "write": true, 00:19:55.989 "unmap": false, 00:19:55.989 "flush": false, 00:19:55.989 "reset": true, 00:19:55.989 "nvme_admin": false, 00:19:55.989 "nvme_io": false, 00:19:55.989 "nvme_io_md": false, 00:19:55.989 "write_zeroes": true, 00:19:55.989 
"zcopy": false, 00:19:55.989 "get_zone_info": false, 00:19:55.989 "zone_management": false, 00:19:55.989 "zone_append": false, 00:19:55.989 "compare": false, 00:19:55.989 "compare_and_write": false, 00:19:55.989 "abort": false, 00:19:55.989 "seek_hole": false, 00:19:55.989 "seek_data": false, 00:19:55.989 "copy": false, 00:19:55.989 "nvme_iov_md": false 00:19:55.989 }, 00:19:55.989 "memory_domains": [ 00:19:55.989 { 00:19:55.989 "dma_device_id": "system", 00:19:55.989 "dma_device_type": 1 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.989 "dma_device_type": 2 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "dma_device_id": "system", 00:19:55.989 "dma_device_type": 1 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.989 "dma_device_type": 2 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "dma_device_id": "system", 00:19:55.989 "dma_device_type": 1 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.989 "dma_device_type": 2 00:19:55.989 } 00:19:55.989 ], 00:19:55.989 "driver_specific": { 00:19:55.989 "raid": { 00:19:55.989 "uuid": "af03083d-d532-41d0-ba14-42ce99f3eeff", 00:19:55.989 "strip_size_kb": 0, 00:19:55.989 "state": "online", 00:19:55.989 "raid_level": "raid1", 00:19:55.989 "superblock": false, 00:19:55.989 "num_base_bdevs": 3, 00:19:55.989 "num_base_bdevs_discovered": 3, 00:19:55.989 "num_base_bdevs_operational": 3, 00:19:55.989 "base_bdevs_list": [ 00:19:55.989 { 00:19:55.989 "name": "NewBaseBdev", 00:19:55.989 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:55.989 "is_configured": true, 00:19:55.989 "data_offset": 0, 00:19:55.989 "data_size": 65536 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "name": "BaseBdev2", 00:19:55.989 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:55.989 "is_configured": true, 00:19:55.989 "data_offset": 0, 00:19:55.989 "data_size": 65536 00:19:55.989 }, 00:19:55.989 { 00:19:55.989 "name": "BaseBdev3", 00:19:55.989 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:55.989 "is_configured": true, 00:19:55.989 "data_offset": 0, 00:19:55.989 "data_size": 65536 00:19:55.989 } 00:19:55.989 ] 00:19:55.989 } 00:19:55.989 } 00:19:55.989 }' 00:19:55.989 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.989 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:55.989 BaseBdev2 00:19:55.989 BaseBdev3' 00:19:55.989 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.989 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:55.989 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.248 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.248 "name": "NewBaseBdev", 00:19:56.248 "aliases": [ 00:19:56.248 "1104827f-1a9a-4e3b-ba26-ebf9cde7a952" 00:19:56.248 ], 00:19:56.248 "product_name": "Malloc disk", 00:19:56.248 "block_size": 512, 00:19:56.248 "num_blocks": 65536, 00:19:56.248 "uuid": "1104827f-1a9a-4e3b-ba26-ebf9cde7a952", 00:19:56.248 "assigned_rate_limits": { 00:19:56.248 "rw_ios_per_sec": 0, 00:19:56.248 "rw_mbytes_per_sec": 0, 00:19:56.248 "r_mbytes_per_sec": 0, 00:19:56.248 
"w_mbytes_per_sec": 0 00:19:56.248 }, 00:19:56.248 "claimed": true, 00:19:56.248 "claim_type": "exclusive_write", 00:19:56.248 "zoned": false, 00:19:56.248 "supported_io_types": { 00:19:56.248 "read": true, 00:19:56.248 "write": true, 00:19:56.248 "unmap": true, 00:19:56.248 "flush": true, 00:19:56.248 "reset": true, 00:19:56.248 "nvme_admin": false, 00:19:56.248 "nvme_io": false, 00:19:56.248 "nvme_io_md": false, 00:19:56.248 "write_zeroes": true, 00:19:56.248 "zcopy": true, 00:19:56.248 "get_zone_info": false, 00:19:56.248 "zone_management": false, 00:19:56.248 "zone_append": false, 00:19:56.248 "compare": false, 00:19:56.248 "compare_and_write": false, 00:19:56.248 "abort": true, 00:19:56.248 "seek_hole": false, 00:19:56.248 "seek_data": false, 00:19:56.248 "copy": true, 00:19:56.248 "nvme_iov_md": false 00:19:56.248 }, 00:19:56.248 "memory_domains": [ 00:19:56.248 { 00:19:56.248 "dma_device_id": "system", 00:19:56.248 "dma_device_type": 1 00:19:56.248 }, 00:19:56.248 { 00:19:56.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.248 "dma_device_type": 2 00:19:56.248 } 00:19:56.248 ], 00:19:56.248 "driver_specific": {} 00:19:56.248 }' 00:19:56.248 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.506 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.765 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.765 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.765 23:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.765 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.765 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:56.765 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:56.765 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.023 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.023 "name": "BaseBdev2", 00:19:57.023 "aliases": [ 00:19:57.023 "f8d49f66-5408-4e61-b2c3-687142cdb2d4" 00:19:57.023 ], 00:19:57.023 "product_name": "Malloc disk", 00:19:57.023 "block_size": 512, 00:19:57.023 "num_blocks": 65536, 00:19:57.023 "uuid": "f8d49f66-5408-4e61-b2c3-687142cdb2d4", 00:19:57.023 "assigned_rate_limits": { 00:19:57.023 "rw_ios_per_sec": 0, 00:19:57.023 "rw_mbytes_per_sec": 0, 00:19:57.023 "r_mbytes_per_sec": 0, 00:19:57.023 "w_mbytes_per_sec": 0 00:19:57.023 }, 00:19:57.023 "claimed": true, 00:19:57.023 "claim_type": "exclusive_write", 00:19:57.023 "zoned": false, 00:19:57.023 "supported_io_types": { 00:19:57.023 "read": 
true, 00:19:57.023 "write": true, 00:19:57.023 "unmap": true, 00:19:57.023 "flush": true, 00:19:57.023 "reset": true, 00:19:57.023 "nvme_admin": false, 00:19:57.023 "nvme_io": false, 00:19:57.023 "nvme_io_md": false, 00:19:57.023 "write_zeroes": true, 00:19:57.023 "zcopy": true, 00:19:57.023 "get_zone_info": false, 00:19:57.023 "zone_management": false, 00:19:57.023 "zone_append": false, 00:19:57.023 "compare": false, 00:19:57.023 "compare_and_write": false, 00:19:57.023 "abort": true, 00:19:57.023 "seek_hole": false, 00:19:57.023 "seek_data": false, 00:19:57.023 "copy": true, 00:19:57.023 "nvme_iov_md": false 00:19:57.023 }, 00:19:57.023 "memory_domains": [ 00:19:57.023 { 00:19:57.023 "dma_device_id": "system", 00:19:57.023 "dma_device_type": 1 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.023 "dma_device_type": 2 00:19:57.023 } 00:19:57.023 ], 00:19:57.023 "driver_specific": {} 00:19:57.023 }' 00:19:57.023 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.023 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.023 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.023 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.282 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:57.540 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:57.540 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:57.540 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:57.540 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:57.540 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:57.540 "name": "BaseBdev3", 00:19:57.540 "aliases": [ 00:19:57.540 "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1" 00:19:57.540 ], 00:19:57.540 "product_name": "Malloc disk", 00:19:57.540 "block_size": 512, 00:19:57.540 "num_blocks": 65536, 00:19:57.540 "uuid": "b4a04c3e-65c8-45e2-b693-0c3dc455d5b1", 00:19:57.540 "assigned_rate_limits": { 00:19:57.540 "rw_ios_per_sec": 0, 00:19:57.540 "rw_mbytes_per_sec": 0, 00:19:57.540 "r_mbytes_per_sec": 0, 00:19:57.540 "w_mbytes_per_sec": 0 00:19:57.540 }, 00:19:57.540 "claimed": true, 00:19:57.540 "claim_type": "exclusive_write", 00:19:57.540 "zoned": false, 00:19:57.540 "supported_io_types": { 00:19:57.540 "read": true, 00:19:57.540 "write": true, 00:19:57.540 "unmap": true, 00:19:57.540 "flush": true, 00:19:57.540 "reset": true, 00:19:57.540 "nvme_admin": false, 00:19:57.540 "nvme_io": false, 00:19:57.540 
"nvme_io_md": false, 00:19:57.540 "write_zeroes": true, 00:19:57.540 "zcopy": true, 00:19:57.540 "get_zone_info": false, 00:19:57.540 "zone_management": false, 00:19:57.540 "zone_append": false, 00:19:57.540 "compare": false, 00:19:57.540 "compare_and_write": false, 00:19:57.540 "abort": true, 00:19:57.540 "seek_hole": false, 00:19:57.540 "seek_data": false, 00:19:57.540 "copy": true, 00:19:57.540 "nvme_iov_md": false 00:19:57.540 }, 00:19:57.540 "memory_domains": [ 00:19:57.540 { 00:19:57.540 "dma_device_id": "system", 00:19:57.540 "dma_device_type": 1 00:19:57.540 }, 00:19:57.540 { 00:19:57.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.540 "dma_device_type": 2 00:19:57.540 } 00:19:57.540 ], 00:19:57.540 "driver_specific": {} 00:19:57.540 }' 00:19:57.540 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.823 23:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:57.823 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:57.823 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.823 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:57.823 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:57.823 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:57.823 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.082 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:58.082 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.082 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.082 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:58.082 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:58.341 [2024-07-13 23:05:47.615264] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:58.341 [2024-07-13 23:05:47.615492] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.341 [2024-07-13 23:05:47.615710] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.341 [2024-07-13 23:05:47.616133] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.341 [2024-07-13 23:05:47.616265] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 140951 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 140951 ']' 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 140951 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140951 00:19:58.341 
killing process with pid 140951 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140951' 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 140951 00:19:58.341 [2024-07-13 23:05:47.656956] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.341 23:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 140951 00:19:58.341 [2024-07-13 23:05:47.695635] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.909 ************************************ 00:19:58.909 END TEST raid_state_function_test 00:19:58.909 ************************************ 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:58.909 00:19:58.909 real 0m30.254s 00:19:58.909 user 0m57.194s 00:19:58.909 sys 0m3.867s 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.909 23:05:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:58.909 23:05:48 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:19:58.909 23:05:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:58.909 23:05:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.909 23:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.909 ************************************ 00:19:58.909 START TEST raid_state_function_test_sb 00:19:58.909 ************************************ 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:58.909 
23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=141936 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141936' 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:58.909 Process raid pid: 141936 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 141936 /var/tmp/spdk-raid.sock 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 141936 ']' 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:58.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.909 23:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.909 [2024-07-13 23:05:48.157243] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
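Each test variant runs against a fresh target: the trace above shows the _sb variant launching its own bdev_svc app with -r for the RPC socket path, -i for the shared-memory id, and -L to enable extra bdev_raid debug logging. A rough sketch of that launch and the waitforlisten poll that follows it:

    # Sketch of the target launch traced above (flags as used by this run).
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # waitforlisten: poll the RPC socket until the target answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done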
00:19:58.909 [2024-07-13 23:05:48.157708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:58.909 [2024-07-13 23:05:48.297544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:59.167 [2024-07-13 23:05:48.383740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:59.167 [2024-07-13 23:05:48.461276] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:00.101 23:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:00.101 23:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0
00:20:00.101 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:20:00.101 [2024-07-13 23:05:49.415200] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:20:00.101 [2024-07-13 23:05:49.415652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:20:00.101 [2024-07-13 23:05:49.415770] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:00.101 [2024-07-13 23:05:49.415906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:00.101 [2024-07-13 23:05:49.416112] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:00.102 [2024-07-13 23:05:49.416209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:00.102 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:00.359 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:00.359 "name": "Existed_Raid",
00:20:00.359 "uuid": "451bfd39-a1b5-4d03-b8f5-df94c7642977",
00:20:00.359 "strip_size_kb": 0,
00:20:00.359 "state": "configuring",
00:20:00.359 "raid_level": "raid1",
00:20:00.359 "superblock": true,
00:20:00.359 "num_base_bdevs": 3,
00:20:00.359 "num_base_bdevs_discovered": 0,
00:20:00.359 "num_base_bdevs_operational": 3,
00:20:00.359 "base_bdevs_list": [
00:20:00.359 {
00:20:00.359 "name": "BaseBdev1",
00:20:00.359 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:00.359 "is_configured": false,
00:20:00.359 "data_offset": 0,
00:20:00.359 "data_size": 0
00:20:00.360 },
00:20:00.360 {
00:20:00.360 "name": "BaseBdev2",
00:20:00.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:00.360 "is_configured": false,
00:20:00.360 "data_offset": 0,
00:20:00.360 "data_size": 0
00:20:00.360 },
00:20:00.360 {
00:20:00.360 "name": "BaseBdev3",
00:20:00.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:00.360 "is_configured": false,
00:20:00.360 "data_offset": 0,
00:20:00.360 "data_size": 0
00:20:00.360 }
00:20:00.360 ]
00:20:00.360 }'
00:20:00.360 23:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:00.360 23:05:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:00.924 23:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:20:01.182 [2024-07-13 23:05:50.463213] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:20:01.182 [2024-07-13 23:05:50.463498] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:20:01.182 23:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:20:01.440 [2024-07-13 23:05:50.675238] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:20:01.440 [2024-07-13 23:05:50.675529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:20:01.440 [2024-07-13 23:05:50.675635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:01.440 [2024-07-13 23:05:50.675695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:01.440 [2024-07-13 23:05:50.675812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:01.440 [2024-07-13 23:05:50.675873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:01.440 23:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:20:01.699 [2024-07-13 23:05:50.945097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:01.699 BaseBdev1
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:20:01.699 23:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:20:01.957 23:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:20:02.214 [
00:20:02.214 {
00:20:02.214 "name": "BaseBdev1",
00:20:02.214 "aliases": [
00:20:02.214 "a8260547-4b2b-45ef-813b-1f14dca84575"
00:20:02.214 ],
00:20:02.214 "product_name": "Malloc disk",
00:20:02.214 "block_size": 512,
00:20:02.214 "num_blocks": 65536,
00:20:02.214 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:02.214 "assigned_rate_limits": {
00:20:02.214 "rw_ios_per_sec": 0,
00:20:02.214 "rw_mbytes_per_sec": 0,
00:20:02.214 "r_mbytes_per_sec": 0,
00:20:02.214 "w_mbytes_per_sec": 0
00:20:02.214 },
00:20:02.214 "claimed": true,
00:20:02.214 "claim_type": "exclusive_write",
00:20:02.214 "zoned": false,
00:20:02.214 "supported_io_types": {
00:20:02.214 "read": true,
00:20:02.214 "write": true,
00:20:02.214 "unmap": true,
00:20:02.214 "flush": true,
00:20:02.214 "reset": true,
00:20:02.214 "nvme_admin": false,
00:20:02.214 "nvme_io": false,
00:20:02.214 "nvme_io_md": false,
00:20:02.214 "write_zeroes": true,
00:20:02.214 "zcopy": true,
00:20:02.214 "get_zone_info": false,
00:20:02.214 "zone_management": false,
00:20:02.214 "zone_append": false,
00:20:02.214 "compare": false,
00:20:02.214 "compare_and_write": false,
00:20:02.214 "abort": true,
00:20:02.214 "seek_hole": false,
00:20:02.214 "seek_data": false,
00:20:02.214 "copy": true,
00:20:02.214 "nvme_iov_md": false
00:20:02.214 },
00:20:02.214 "memory_domains": [
00:20:02.214 {
00:20:02.214 "dma_device_id": "system",
00:20:02.214 "dma_device_type": 1
00:20:02.214 },
00:20:02.214 {
00:20:02.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:02.214 "dma_device_type": 2
00:20:02.214 }
00:20:02.214 ],
00:20:02.214 "driver_specific": {}
00:20:02.214 }
00:20:02.214 ]
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:02.214 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:02.214 "name": "Existed_Raid",
00:20:02.214 "uuid": "bc59e7d2-2ad8-4611-b9ab-9317efbf3a41",
00:20:02.214 "strip_size_kb": 0,
00:20:02.214 "state": "configuring",
00:20:02.214 "raid_level": "raid1",
00:20:02.214 "superblock": true,
00:20:02.214 "num_base_bdevs": 3,
00:20:02.214 "num_base_bdevs_discovered": 1,
00:20:02.214 "num_base_bdevs_operational": 3,
00:20:02.214 "base_bdevs_list": [
00:20:02.214 {
00:20:02.215 "name": "BaseBdev1",
00:20:02.215 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:02.215 "is_configured": true,
00:20:02.215 "data_offset": 2048,
00:20:02.215 "data_size": 63488
00:20:02.215 },
00:20:02.215 {
00:20:02.215 "name": "BaseBdev2",
00:20:02.215 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:02.215 "is_configured": false,
00:20:02.215 "data_offset": 0,
00:20:02.215 "data_size": 0
00:20:02.215 },
00:20:02.215 {
00:20:02.215 "name": "BaseBdev3",
00:20:02.215 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:02.215 "is_configured": false,
00:20:02.215 "data_offset": 0,
00:20:02.215 "data_size": 0
00:20:02.215 }
00:20:02.215 ]
00:20:02.215 }'
00:20:02.215 23:05:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:02.215 23:05:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:03.204 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:20:03.204 [2024-07-13 23:05:52.489607] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:20:03.204 [2024-07-13 23:05:52.489851] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:20:03.204 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:20:03.465 [2024-07-13 23:05:52.685695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:03.465 [2024-07-13 23:05:52.687956] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:03.465 [2024-07-13 23:05:52.688166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:03.465 [2024-07-13 23:05:52.688283] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:03.465 [2024-07-13 23:05:52.688349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:03.465 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:03.724 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:03.724 "name": "Existed_Raid",
00:20:03.724 "uuid": "627d9880-4d39-4f00-b7e3-d851ab6f67b8",
00:20:03.724 "strip_size_kb": 0,
00:20:03.724 "state": "configuring",
00:20:03.724 "raid_level": "raid1",
00:20:03.724 "superblock": true,
00:20:03.724 "num_base_bdevs": 3,
00:20:03.724 "num_base_bdevs_discovered": 1,
00:20:03.724 "num_base_bdevs_operational": 3,
00:20:03.724 "base_bdevs_list": [
00:20:03.724 {
00:20:03.724 "name": "BaseBdev1",
00:20:03.724 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:03.724 "is_configured": true,
00:20:03.724 "data_offset": 2048,
00:20:03.724 "data_size": 63488
00:20:03.724 },
00:20:03.724 {
00:20:03.724 "name": "BaseBdev2",
00:20:03.724 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:03.724 "is_configured": false,
00:20:03.724 "data_offset": 0,
00:20:03.724 "data_size": 0
00:20:03.724 },
00:20:03.724 {
00:20:03.724 "name": "BaseBdev3",
00:20:03.724 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:03.724 "is_configured": false,
00:20:03.724 "data_offset": 0,
00:20:03.724 "data_size": 0
00:20:03.724 }
00:20:03.724 ]
00:20:03.724 }'
00:20:03.724 23:05:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:03.724 23:05:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:04.289 23:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:20:04.547 [2024-07-13 23:05:53.949813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:04.547 BaseBdev2
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:20:04.805 23:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:20:05.064 [
00:20:05.064 {
00:20:05.064 "name": "BaseBdev2",
00:20:05.064 "aliases": [
00:20:05.064 "b342f83f-aba1-44f6-a141-12e134d6a505"
00:20:05.064 ],
00:20:05.064 "product_name": "Malloc disk",
00:20:05.064 "block_size": 512,
00:20:05.064 "num_blocks": 65536,
00:20:05.064 "uuid": "b342f83f-aba1-44f6-a141-12e134d6a505",
00:20:05.064 "assigned_rate_limits": {
00:20:05.064 "rw_ios_per_sec": 0,
00:20:05.064 "rw_mbytes_per_sec": 0,
00:20:05.064 "r_mbytes_per_sec": 0,
00:20:05.064 "w_mbytes_per_sec": 0
00:20:05.064 },
00:20:05.064 "claimed": true,
00:20:05.064 "claim_type": "exclusive_write",
00:20:05.064 "zoned": false,
00:20:05.064 "supported_io_types": {
00:20:05.064 "read": true,
00:20:05.064 "write": true,
00:20:05.064 "unmap": true,
00:20:05.064 "flush": true,
00:20:05.064 "reset": true,
00:20:05.064 "nvme_admin": false,
00:20:05.064 "nvme_io": false,
00:20:05.064 "nvme_io_md": false,
00:20:05.064 "write_zeroes": true,
00:20:05.064 "zcopy": true,
00:20:05.064 "get_zone_info": false,
00:20:05.064 "zone_management": false,
00:20:05.064 "zone_append": false,
00:20:05.064 "compare": false,
00:20:05.064 "compare_and_write": false,
00:20:05.064 "abort": true,
00:20:05.064 "seek_hole": false,
00:20:05.064 "seek_data": false,
00:20:05.064 "copy": true,
00:20:05.064 "nvme_iov_md": false
00:20:05.064 },
00:20:05.064 "memory_domains": [
00:20:05.064 {
00:20:05.064 "dma_device_id": "system",
00:20:05.064 "dma_device_type": 1
00:20:05.064 },
00:20:05.064 {
00:20:05.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:05.064 "dma_device_type": 2
00:20:05.064 }
00:20:05.064 ],
00:20:05.064 "driver_specific": {}
00:20:05.064 }
00:20:05.064 ]
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:05.064 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:05.323 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:05.323 "name": "Existed_Raid",
00:20:05.323 "uuid": "627d9880-4d39-4f00-b7e3-d851ab6f67b8",
00:20:05.323 "strip_size_kb": 0,
00:20:05.323 "state": "configuring",
00:20:05.323 "raid_level": "raid1",
00:20:05.323 "superblock": true,
00:20:05.323 "num_base_bdevs": 3,
00:20:05.323 "num_base_bdevs_discovered": 2,
00:20:05.323 "num_base_bdevs_operational": 3,
00:20:05.323 "base_bdevs_list": [
00:20:05.323 {
00:20:05.323 "name": "BaseBdev1",
00:20:05.323 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:05.323 "is_configured": true,
00:20:05.323 "data_offset": 2048,
00:20:05.323 "data_size": 63488
00:20:05.323 },
00:20:05.323 {
00:20:05.323 "name": "BaseBdev2",
00:20:05.323 "uuid": "b342f83f-aba1-44f6-a141-12e134d6a505",
00:20:05.323 "is_configured": true,
00:20:05.323 "data_offset": 2048,
00:20:05.323 "data_size": 63488
00:20:05.323 },
00:20:05.323 {
00:20:05.323 "name": "BaseBdev3",
00:20:05.323 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:05.323 "is_configured": false,
00:20:05.323 "data_offset": 0,
00:20:05.323 "data_size": 0
00:20:05.323 }
00:20:05.323 ]
00:20:05.323 }'
00:20:05.323 23:05:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:05.323 23:05:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:05.889 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:20:06.147 [2024-07-13 23:05:55.537873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:06.147 [2024-07-13 23:05:55.538396] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:20:06.147 [2024-07-13 23:05:55.538529] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:06.147 [2024-07-13 23:05:55.538731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:20:06.147 BaseBdev3
00:20:06.147 [2024-07-13 23:05:55.539322] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:20:06.147 [2024-07-13 23:05:55.539538] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:20:06.147 [2024-07-13 23:05:55.539797] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:20:06.147 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:20:06.405 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:20:06.663 [
00:20:06.663 {
00:20:06.663 "name": "BaseBdev3",
00:20:06.663 "aliases": [
00:20:06.663 "0eefeb0d-1293-4aed-b08b-41427b083377"
00:20:06.663 ],
00:20:06.663 "product_name": "Malloc disk",
00:20:06.663 "block_size": 512,
00:20:06.663 "num_blocks": 65536,
00:20:06.663 "uuid": "0eefeb0d-1293-4aed-b08b-41427b083377",
00:20:06.663 "assigned_rate_limits": {
00:20:06.663 "rw_ios_per_sec": 0,
00:20:06.663 "rw_mbytes_per_sec": 0,
00:20:06.663 "r_mbytes_per_sec": 0,
00:20:06.663 "w_mbytes_per_sec": 0
00:20:06.663 },
00:20:06.663 "claimed": true,
00:20:06.663 "claim_type": "exclusive_write",
00:20:06.663 "zoned": false,
00:20:06.663 "supported_io_types": {
00:20:06.663 "read": true,
00:20:06.663 "write": true,
00:20:06.663 "unmap": true,
00:20:06.663 "flush": true,
00:20:06.663 "reset": true,
00:20:06.663 "nvme_admin": false,
00:20:06.663 "nvme_io": false,
00:20:06.663 "nvme_io_md": false,
00:20:06.663 "write_zeroes": true,
00:20:06.663 "zcopy": true,
00:20:06.663 "get_zone_info": false,
00:20:06.663 "zone_management": false,
00:20:06.663 "zone_append": false,
00:20:06.663 "compare": false,
00:20:06.663 "compare_and_write": false,
00:20:06.663 "abort": true,
00:20:06.663 "seek_hole": false,
00:20:06.663 "seek_data": false,
00:20:06.663 "copy": true,
00:20:06.663 "nvme_iov_md": false
00:20:06.663 },
00:20:06.663 "memory_domains": [
00:20:06.663 {
00:20:06.663 "dma_device_id": "system",
00:20:06.663 "dma_device_type": 1
00:20:06.663 },
00:20:06.663 {
00:20:06.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:06.663 "dma_device_type": 2
00:20:06.663 }
00:20:06.663 ],
00:20:06.663 "driver_specific": {}
00:20:06.663 }
00:20:06.663 ]
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:06.663 23:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:06.921 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:06.921 "name": "Existed_Raid",
00:20:06.921 "uuid": "627d9880-4d39-4f00-b7e3-d851ab6f67b8",
00:20:06.921 "strip_size_kb": 0,
00:20:06.921 "state": "online",
00:20:06.921 "raid_level": "raid1",
00:20:06.921 "superblock": true,
00:20:06.921 "num_base_bdevs": 3,
00:20:06.921 "num_base_bdevs_discovered": 3,
00:20:06.921 "num_base_bdevs_operational": 3,
00:20:06.921 "base_bdevs_list": [
00:20:06.921 {
00:20:06.921 "name": "BaseBdev1",
00:20:06.921 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:06.921 "is_configured": true,
00:20:06.921 "data_offset": 2048,
00:20:06.921 "data_size": 63488
00:20:06.921 },
00:20:06.921 {
00:20:06.921 "name": "BaseBdev2",
00:20:06.921 "uuid": "b342f83f-aba1-44f6-a141-12e134d6a505",
00:20:06.921 "is_configured": true,
00:20:06.921 "data_offset": 2048,
00:20:06.921 "data_size": 63488
00:20:06.921 },
00:20:06.921 {
00:20:06.921 "name": "BaseBdev3",
00:20:06.921 "uuid": "0eefeb0d-1293-4aed-b08b-41427b083377",
00:20:06.921 "is_configured": true,
00:20:06.921 "data_offset": 2048,
00:20:06.921 "data_size": 63488
00:20:06.921 }
00:20:06.921 ]
00:20:06.921 }'
00:20:06.921 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:06.921 23:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:20:07.855 23:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:20:07.855 [2024-07-13 23:05:57.174590] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:07.855 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:20:07.855 "name": "Existed_Raid",
00:20:07.855 "aliases": [
00:20:07.855 "627d9880-4d39-4f00-b7e3-d851ab6f67b8"
00:20:07.855 ],
00:20:07.855 "product_name": "Raid Volume",
00:20:07.855 "block_size": 512,
00:20:07.855 "num_blocks": 63488,
00:20:07.855 "uuid": "627d9880-4d39-4f00-b7e3-d851ab6f67b8",
00:20:07.855 "assigned_rate_limits": {
00:20:07.855 "rw_ios_per_sec": 0,
00:20:07.855 "rw_mbytes_per_sec": 0,
00:20:07.855 "r_mbytes_per_sec": 0,
00:20:07.855 "w_mbytes_per_sec": 0
00:20:07.855 },
00:20:07.855 "claimed": false,
00:20:07.855 "zoned": false,
00:20:07.855 "supported_io_types": {
00:20:07.855 "read": true,
00:20:07.855 "write": true,
00:20:07.855 "unmap": false,
00:20:07.855 "flush": false,
00:20:07.855 "reset": true,
00:20:07.855 "nvme_admin": false,
00:20:07.855 "nvme_io": false,
00:20:07.855 "nvme_io_md": false,
00:20:07.855 "write_zeroes": true,
00:20:07.855 "zcopy": false,
00:20:07.855 "get_zone_info": false,
00:20:07.855 "zone_management": false,
00:20:07.855 "zone_append": false,
00:20:07.855 "compare": false,
00:20:07.855 "compare_and_write": false,
00:20:07.855 "abort": false,
00:20:07.855 "seek_hole": false,
00:20:07.855 "seek_data": false,
00:20:07.855 "copy": false,
00:20:07.855 "nvme_iov_md": false
00:20:07.855 },
00:20:07.855 "memory_domains": [
00:20:07.855 {
00:20:07.855 "dma_device_id": "system",
00:20:07.855 "dma_device_type": 1
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:07.855 "dma_device_type": 2
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "dma_device_id": "system",
00:20:07.855 "dma_device_type": 1
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:07.855 "dma_device_type": 2
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "dma_device_id": "system",
00:20:07.855 "dma_device_type": 1
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:07.855 "dma_device_type": 2
00:20:07.855 }
00:20:07.855 ],
00:20:07.855 "driver_specific": {
00:20:07.855 "raid": {
00:20:07.855 "uuid": "627d9880-4d39-4f00-b7e3-d851ab6f67b8",
00:20:07.855 "strip_size_kb": 0,
00:20:07.855 "state": "online",
00:20:07.855 "raid_level": "raid1",
00:20:07.855 "superblock": true,
00:20:07.855 "num_base_bdevs": 3,
00:20:07.855 "num_base_bdevs_discovered": 3,
00:20:07.855 "num_base_bdevs_operational": 3,
00:20:07.855 "base_bdevs_list": [
00:20:07.855 {
00:20:07.855 "name": "BaseBdev1",
00:20:07.855 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:07.855 "is_configured": true,
00:20:07.855 "data_offset": 2048,
00:20:07.855 "data_size": 63488
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "name": "BaseBdev2",
00:20:07.855 "uuid": "b342f83f-aba1-44f6-a141-12e134d6a505",
00:20:07.855 "is_configured": true,
00:20:07.855 "data_offset": 2048,
00:20:07.855 "data_size": 63488
00:20:07.855 },
00:20:07.855 {
00:20:07.855 "name": "BaseBdev3",
00:20:07.855 "uuid": "0eefeb0d-1293-4aed-b08b-41427b083377",
00:20:07.855 "is_configured": true,
00:20:07.855 "data_offset": 2048,
00:20:07.855 "data_size": 63488
00:20:07.855 }
00:20:07.855 ]
00:20:07.855 }
00:20:07.855 }
00:20:07.855 }'
00:20:07.856 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:07.856 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:20:07.856 BaseBdev2
00:20:07.856 BaseBdev3'
00:20:07.856 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:20:07.856 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:20:08.113 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:20:08.113 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:20:08.113 "name": "BaseBdev1",
00:20:08.113 "aliases": [
00:20:08.113 "a8260547-4b2b-45ef-813b-1f14dca84575"
00:20:08.113 ],
00:20:08.113 "product_name": "Malloc disk",
00:20:08.113 "block_size": 512,
00:20:08.113 "num_blocks": 65536,
00:20:08.113 "uuid": "a8260547-4b2b-45ef-813b-1f14dca84575",
00:20:08.113 "assigned_rate_limits": {
00:20:08.113 "rw_ios_per_sec": 0,
00:20:08.113 "rw_mbytes_per_sec": 0,
00:20:08.113 "r_mbytes_per_sec": 0,
00:20:08.113 "w_mbytes_per_sec": 0
00:20:08.113 },
00:20:08.113 "claimed": true,
00:20:08.113 "claim_type": "exclusive_write",
00:20:08.113 "zoned": false,
00:20:08.113 "supported_io_types": {
00:20:08.113 "read": true,
00:20:08.113 "write": true,
00:20:08.113 "unmap": true,
00:20:08.113 "flush": true,
00:20:08.113 "reset": true,
00:20:08.113 "nvme_admin": false,
00:20:08.113 "nvme_io": false,
00:20:08.113 "nvme_io_md": false,
00:20:08.113 "write_zeroes": true,
00:20:08.113 "zcopy": true,
00:20:08.113 "get_zone_info": false,
00:20:08.113 "zone_management": false,
00:20:08.113 "zone_append": false,
00:20:08.113 "compare": false,
00:20:08.113 "compare_and_write": false,
00:20:08.113 "abort": true,
00:20:08.113 "seek_hole": false,
00:20:08.113 "seek_data": false,
00:20:08.113 "copy": true,
00:20:08.113 "nvme_iov_md": false
00:20:08.113 },
00:20:08.113 "memory_domains": [
00:20:08.113 {
00:20:08.113 "dma_device_id": "system",
00:20:08.113 "dma_device_type": 1
00:20:08.113 },
00:20:08.113 {
00:20:08.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:08.113 "dma_device_type": 2
00:20:08.113 }
00:20:08.113 ],
00:20:08.113 "driver_specific": {}
00:20:08.113 }'
00:20:08.113 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:20:08.371 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:20:08.629 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:20:08.629 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:20:08.629 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:20:08.629 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:20:08.629 23:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:20:08.887 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:20:08.887 "name": "BaseBdev2",
00:20:08.887 "aliases": [
00:20:08.887 "b342f83f-aba1-44f6-a141-12e134d6a505" 00:20:08.887 ], 00:20:08.887 "product_name": "Malloc disk", 00:20:08.887 "block_size": 512, 00:20:08.887 "num_blocks": 65536, 00:20:08.887 "uuid": "b342f83f-aba1-44f6-a141-12e134d6a505", 00:20:08.887 "assigned_rate_limits": { 00:20:08.887 "rw_ios_per_sec": 0, 00:20:08.887 "rw_mbytes_per_sec": 0, 00:20:08.887 "r_mbytes_per_sec": 0, 00:20:08.887 "w_mbytes_per_sec": 0 00:20:08.887 }, 00:20:08.887 "claimed": true, 00:20:08.887 "claim_type": "exclusive_write", 00:20:08.887 "zoned": false, 00:20:08.887 "supported_io_types": { 00:20:08.887 "read": true, 00:20:08.887 "write": true, 00:20:08.887 "unmap": true, 00:20:08.887 "flush": true, 00:20:08.887 "reset": true, 00:20:08.887 "nvme_admin": false, 00:20:08.887 "nvme_io": false, 00:20:08.887 "nvme_io_md": false, 00:20:08.887 "write_zeroes": true, 00:20:08.887 "zcopy": true, 00:20:08.887 "get_zone_info": false, 00:20:08.887 "zone_management": false, 00:20:08.887 "zone_append": false, 00:20:08.887 "compare": false, 00:20:08.887 "compare_and_write": false, 00:20:08.887 "abort": true, 00:20:08.887 "seek_hole": false, 00:20:08.887 "seek_data": false, 00:20:08.887 "copy": true, 00:20:08.887 "nvme_iov_md": false 00:20:08.887 }, 00:20:08.887 "memory_domains": [ 00:20:08.887 { 00:20:08.887 "dma_device_id": "system", 00:20:08.887 "dma_device_type": 1 00:20:08.887 }, 00:20:08.887 { 00:20:08.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.887 "dma_device_type": 2 00:20:08.887 } 00:20:08.887 ], 00:20:08.887 "driver_specific": {} 00:20:08.887 }' 00:20:08.887 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.887 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:08.887 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:08.887 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:08.887 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:09.145 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.711 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.711 "name": "BaseBdev3", 00:20:09.711 "aliases": [ 00:20:09.711 "0eefeb0d-1293-4aed-b08b-41427b083377" 00:20:09.711 ], 00:20:09.711 "product_name": "Malloc disk", 00:20:09.711 "block_size": 512, 
00:20:09.711 "num_blocks": 65536, 00:20:09.711 "uuid": "0eefeb0d-1293-4aed-b08b-41427b083377", 00:20:09.711 "assigned_rate_limits": { 00:20:09.711 "rw_ios_per_sec": 0, 00:20:09.711 "rw_mbytes_per_sec": 0, 00:20:09.711 "r_mbytes_per_sec": 0, 00:20:09.711 "w_mbytes_per_sec": 0 00:20:09.711 }, 00:20:09.711 "claimed": true, 00:20:09.711 "claim_type": "exclusive_write", 00:20:09.711 "zoned": false, 00:20:09.711 "supported_io_types": { 00:20:09.711 "read": true, 00:20:09.711 "write": true, 00:20:09.711 "unmap": true, 00:20:09.711 "flush": true, 00:20:09.711 "reset": true, 00:20:09.711 "nvme_admin": false, 00:20:09.711 "nvme_io": false, 00:20:09.711 "nvme_io_md": false, 00:20:09.711 "write_zeroes": true, 00:20:09.711 "zcopy": true, 00:20:09.711 "get_zone_info": false, 00:20:09.711 "zone_management": false, 00:20:09.711 "zone_append": false, 00:20:09.711 "compare": false, 00:20:09.711 "compare_and_write": false, 00:20:09.711 "abort": true, 00:20:09.711 "seek_hole": false, 00:20:09.711 "seek_data": false, 00:20:09.711 "copy": true, 00:20:09.711 "nvme_iov_md": false 00:20:09.711 }, 00:20:09.711 "memory_domains": [ 00:20:09.711 { 00:20:09.711 "dma_device_id": "system", 00:20:09.711 "dma_device_type": 1 00:20:09.711 }, 00:20:09.711 { 00:20:09.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.711 "dma_device_type": 2 00:20:09.711 } 00:20:09.711 ], 00:20:09.711 "driver_specific": {} 00:20:09.711 }' 00:20:09.711 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.711 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.711 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:09.711 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.711 23:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.711 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.711 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.711 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.711 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.711 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.968 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.968 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:09.968 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:10.225 [2024-07-13 23:05:59.462998] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:10.226 23:05:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.226 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.482 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.482 "name": "Existed_Raid", 00:20:10.482 "uuid": "627d9880-4d39-4f00-b7e3-d851ab6f67b8", 00:20:10.482 "strip_size_kb": 0, 00:20:10.482 "state": "online", 00:20:10.482 "raid_level": "raid1", 00:20:10.482 "superblock": true, 00:20:10.482 "num_base_bdevs": 3, 00:20:10.482 "num_base_bdevs_discovered": 2, 00:20:10.482 "num_base_bdevs_operational": 2, 00:20:10.482 "base_bdevs_list": [ 00:20:10.482 { 00:20:10.482 "name": null, 00:20:10.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.482 "is_configured": false, 00:20:10.482 "data_offset": 2048, 00:20:10.482 "data_size": 63488 00:20:10.482 }, 00:20:10.482 { 00:20:10.482 "name": "BaseBdev2", 00:20:10.482 "uuid": "b342f83f-aba1-44f6-a141-12e134d6a505", 00:20:10.482 "is_configured": true, 00:20:10.482 "data_offset": 2048, 00:20:10.482 "data_size": 63488 00:20:10.482 }, 00:20:10.482 { 00:20:10.482 "name": "BaseBdev3", 00:20:10.482 "uuid": "0eefeb0d-1293-4aed-b08b-41427b083377", 00:20:10.482 "is_configured": true, 00:20:10.482 "data_offset": 2048, 00:20:10.482 "data_size": 63488 00:20:10.482 } 00:20:10.482 ] 00:20:10.482 }' 00:20:10.482 23:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.482 23:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.050 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:11.050 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:11.050 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:11.050 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.318 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:11.318 23:06:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:11.318 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:11.636 [2024-07-13 23:06:00.796855] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:11.636 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:11.636 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:11.636 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.636 23:06:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:11.893 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:11.893 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:11.893 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:12.149 [2024-07-13 23:06:01.350625] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.149 [2024-07-13 23:06:01.350927] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.149 [2024-07-13 23:06:01.363625] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.149 [2024-07-13 23:06:01.363852] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.149 [2024-07-13 23:06:01.363953] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:20:12.149 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:12.149 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.149 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.149 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:12.407 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:12.408 BaseBdev2 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:12.408 23:06:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:12.408 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:12.666 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:12.666 23:06:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:12.925 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:12.925 [ 00:20:12.925 { 00:20:12.925 "name": "BaseBdev2", 00:20:12.925 "aliases": [ 00:20:12.925 "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b" 00:20:12.925 ], 00:20:12.925 "product_name": "Malloc disk", 00:20:12.925 "block_size": 512, 00:20:12.925 "num_blocks": 65536, 00:20:12.925 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:12.925 "assigned_rate_limits": { 00:20:12.925 "rw_ios_per_sec": 0, 00:20:12.925 "rw_mbytes_per_sec": 0, 00:20:12.925 "r_mbytes_per_sec": 0, 00:20:12.925 "w_mbytes_per_sec": 0 00:20:12.925 }, 00:20:12.925 "claimed": false, 00:20:12.925 "zoned": false, 00:20:12.925 "supported_io_types": { 00:20:12.925 "read": true, 00:20:12.925 "write": true, 00:20:12.925 "unmap": true, 00:20:12.925 "flush": true, 00:20:12.925 "reset": true, 00:20:12.925 "nvme_admin": false, 00:20:12.925 "nvme_io": false, 00:20:12.925 "nvme_io_md": false, 00:20:12.925 "write_zeroes": true, 00:20:12.925 "zcopy": true, 00:20:12.925 "get_zone_info": false, 00:20:12.925 "zone_management": false, 00:20:12.925 "zone_append": false, 00:20:12.925 "compare": false, 00:20:12.925 "compare_and_write": false, 00:20:12.925 "abort": true, 00:20:12.925 "seek_hole": false, 00:20:12.925 "seek_data": false, 00:20:12.925 "copy": true, 00:20:12.925 "nvme_iov_md": false 00:20:12.925 }, 00:20:12.925 "memory_domains": [ 00:20:12.925 { 00:20:12.925 "dma_device_id": "system", 00:20:12.925 "dma_device_type": 1 00:20:12.925 }, 00:20:12.925 { 00:20:12.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.925 "dma_device_type": 2 00:20:12.925 } 00:20:12.925 ], 00:20:12.925 "driver_specific": {} 00:20:12.925 } 00:20:12.925 ] 00:20:12.925 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:12.925 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:12.925 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:12.925 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:13.183 BaseBdev3 00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:13.183 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.441 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:13.699 [ 00:20:13.699 { 00:20:13.699 "name": "BaseBdev3", 00:20:13.699 "aliases": [ 00:20:13.699 "95695bd1-8de9-4f59-a6f8-b41159ffde1a" 00:20:13.699 ], 00:20:13.699 "product_name": "Malloc disk", 00:20:13.699 "block_size": 512, 00:20:13.699 "num_blocks": 65536, 00:20:13.699 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:13.699 "assigned_rate_limits": { 00:20:13.699 "rw_ios_per_sec": 0, 00:20:13.699 "rw_mbytes_per_sec": 0, 00:20:13.699 "r_mbytes_per_sec": 0, 00:20:13.699 "w_mbytes_per_sec": 0 00:20:13.699 }, 00:20:13.699 "claimed": false, 00:20:13.699 "zoned": false, 00:20:13.699 "supported_io_types": { 00:20:13.699 "read": true, 00:20:13.699 "write": true, 00:20:13.699 "unmap": true, 00:20:13.699 "flush": true, 00:20:13.699 "reset": true, 00:20:13.699 "nvme_admin": false, 00:20:13.699 "nvme_io": false, 00:20:13.699 "nvme_io_md": false, 00:20:13.699 "write_zeroes": true, 00:20:13.699 "zcopy": true, 00:20:13.699 "get_zone_info": false, 00:20:13.699 "zone_management": false, 00:20:13.699 "zone_append": false, 00:20:13.699 "compare": false, 00:20:13.699 "compare_and_write": false, 00:20:13.699 "abort": true, 00:20:13.699 "seek_hole": false, 00:20:13.699 "seek_data": false, 00:20:13.699 "copy": true, 00:20:13.699 "nvme_iov_md": false 00:20:13.699 }, 00:20:13.699 "memory_domains": [ 00:20:13.699 { 00:20:13.699 "dma_device_id": "system", 00:20:13.699 "dma_device_type": 1 00:20:13.699 }, 00:20:13.699 { 00:20:13.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.699 "dma_device_type": 2 00:20:13.699 } 00:20:13.699 ], 00:20:13.699 "driver_specific": {} 00:20:13.699 } 00:20:13.699 ] 00:20:13.699 23:06:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:13.699 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:13.699 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:13.699 23:06:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:13.957 [2024-07-13 23:06:03.255385] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.957 [2024-07-13 23:06:03.255682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.957 [2024-07-13 23:06:03.255859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.957 [2024-07-13 23:06:03.258546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.958 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.215 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.215 "name": "Existed_Raid", 00:20:14.215 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:14.215 "strip_size_kb": 0, 00:20:14.215 "state": "configuring", 00:20:14.215 "raid_level": "raid1", 00:20:14.215 "superblock": true, 00:20:14.215 "num_base_bdevs": 3, 00:20:14.215 "num_base_bdevs_discovered": 2, 00:20:14.215 "num_base_bdevs_operational": 3, 00:20:14.215 "base_bdevs_list": [ 00:20:14.215 { 00:20:14.215 "name": "BaseBdev1", 00:20:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.215 "is_configured": false, 00:20:14.215 "data_offset": 0, 00:20:14.215 "data_size": 0 00:20:14.215 }, 00:20:14.215 { 00:20:14.215 "name": "BaseBdev2", 00:20:14.215 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:14.215 "is_configured": true, 00:20:14.215 "data_offset": 2048, 00:20:14.215 "data_size": 63488 00:20:14.215 }, 00:20:14.215 { 00:20:14.215 "name": "BaseBdev3", 00:20:14.215 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:14.215 "is_configured": true, 00:20:14.215 "data_offset": 2048, 00:20:14.215 "data_size": 63488 00:20:14.215 } 00:20:14.215 ] 00:20:14.215 }' 00:20:14.215 23:06:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.215 23:06:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.803 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:15.060 [2024-07-13 23:06:04.371589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:15.060 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:15.060 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.060 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.060 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.060 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.060 23:06:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:15.061 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.061 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.061 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.061 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.061 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.061 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.319 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.319 "name": "Existed_Raid", 00:20:15.319 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:15.319 "strip_size_kb": 0, 00:20:15.319 "state": "configuring", 00:20:15.319 "raid_level": "raid1", 00:20:15.319 "superblock": true, 00:20:15.319 "num_base_bdevs": 3, 00:20:15.319 "num_base_bdevs_discovered": 1, 00:20:15.319 "num_base_bdevs_operational": 3, 00:20:15.319 "base_bdevs_list": [ 00:20:15.319 { 00:20:15.319 "name": "BaseBdev1", 00:20:15.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.319 "is_configured": false, 00:20:15.319 "data_offset": 0, 00:20:15.319 "data_size": 0 00:20:15.319 }, 00:20:15.319 { 00:20:15.319 "name": null, 00:20:15.319 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:15.319 "is_configured": false, 00:20:15.319 "data_offset": 2048, 00:20:15.319 "data_size": 63488 00:20:15.319 }, 00:20:15.319 { 00:20:15.319 "name": "BaseBdev3", 00:20:15.319 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:15.319 "is_configured": true, 00:20:15.319 "data_offset": 2048, 00:20:15.319 "data_size": 63488 00:20:15.319 } 00:20:15.319 ] 00:20:15.319 }' 00:20:15.319 23:06:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.319 23:06:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.885 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.885 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:16.143 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:16.143 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:16.401 [2024-07-13 23:06:05.775285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.401 BaseBdev1 00:20:16.401 23:06:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:16.401 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:16.401 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:16.401 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:16.401 23:06:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:16.401 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:16.401 23:06:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.659 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:16.917 [ 00:20:16.917 { 00:20:16.917 "name": "BaseBdev1", 00:20:16.917 "aliases": [ 00:20:16.917 "c65d750b-ce81-40f5-9475-80d8839c30f0" 00:20:16.917 ], 00:20:16.917 "product_name": "Malloc disk", 00:20:16.917 "block_size": 512, 00:20:16.917 "num_blocks": 65536, 00:20:16.917 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:16.917 "assigned_rate_limits": { 00:20:16.917 "rw_ios_per_sec": 0, 00:20:16.917 "rw_mbytes_per_sec": 0, 00:20:16.917 "r_mbytes_per_sec": 0, 00:20:16.917 "w_mbytes_per_sec": 0 00:20:16.917 }, 00:20:16.917 "claimed": true, 00:20:16.917 "claim_type": "exclusive_write", 00:20:16.917 "zoned": false, 00:20:16.917 "supported_io_types": { 00:20:16.917 "read": true, 00:20:16.917 "write": true, 00:20:16.917 "unmap": true, 00:20:16.917 "flush": true, 00:20:16.917 "reset": true, 00:20:16.917 "nvme_admin": false, 00:20:16.917 "nvme_io": false, 00:20:16.917 "nvme_io_md": false, 00:20:16.917 "write_zeroes": true, 00:20:16.917 "zcopy": true, 00:20:16.917 "get_zone_info": false, 00:20:16.917 "zone_management": false, 00:20:16.917 "zone_append": false, 00:20:16.917 "compare": false, 00:20:16.917 "compare_and_write": false, 00:20:16.917 "abort": true, 00:20:16.917 "seek_hole": false, 00:20:16.917 "seek_data": false, 00:20:16.917 "copy": true, 00:20:16.917 "nvme_iov_md": false 00:20:16.917 }, 00:20:16.917 "memory_domains": [ 00:20:16.917 { 00:20:16.917 "dma_device_id": "system", 00:20:16.917 "dma_device_type": 1 00:20:16.917 }, 00:20:16.917 { 00:20:16.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.917 "dma_device_type": 2 00:20:16.917 } 00:20:16.917 ], 00:20:16.917 "driver_specific": {} 00:20:16.917 } 00:20:16.917 ] 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.917 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.918 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
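
Note on the trace above: the waitforbdev helper amounts to three RPCs against the test socket: wait for bdev examine to finish, poll bdev_get_bdevs with a 2000 ms timeout, then return. A minimal stand-alone sketch of that sequence plus the raid-state check that follows it, assuming a local SPDK checkout at $SPDK_DIR (a placeholder, not a variable from this run) and a target already listening on /var/tmp/spdk-raid.sock:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1    # 32 MiB malloc disk, 512-byte blocks
    $rpc bdev_wait_for_examine                     # block until examine callbacks complete
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000       # wait up to 2000 ms for the bdev to appear
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
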
00:20:16.918 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.918 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.202 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.202 "name": "Existed_Raid", 00:20:17.202 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:17.202 "strip_size_kb": 0, 00:20:17.202 "state": "configuring", 00:20:17.202 "raid_level": "raid1", 00:20:17.202 "superblock": true, 00:20:17.202 "num_base_bdevs": 3, 00:20:17.202 "num_base_bdevs_discovered": 2, 00:20:17.202 "num_base_bdevs_operational": 3, 00:20:17.202 "base_bdevs_list": [ 00:20:17.202 { 00:20:17.202 "name": "BaseBdev1", 00:20:17.202 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:17.202 "is_configured": true, 00:20:17.202 "data_offset": 2048, 00:20:17.202 "data_size": 63488 00:20:17.202 }, 00:20:17.202 { 00:20:17.202 "name": null, 00:20:17.202 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:17.202 "is_configured": false, 00:20:17.202 "data_offset": 2048, 00:20:17.202 "data_size": 63488 00:20:17.202 }, 00:20:17.202 { 00:20:17.202 "name": "BaseBdev3", 00:20:17.202 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:17.202 "is_configured": true, 00:20:17.202 "data_offset": 2048, 00:20:17.202 "data_size": 63488 00:20:17.202 } 00:20:17.202 ] 00:20:17.202 }' 00:20:17.202 23:06:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.202 23:06:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.769 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.769 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:18.028 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:18.028 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:18.286 [2024-07-13 23:06:07.589579] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.286 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.544 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.544 "name": "Existed_Raid", 00:20:18.544 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:18.544 "strip_size_kb": 0, 00:20:18.544 "state": "configuring", 00:20:18.544 "raid_level": "raid1", 00:20:18.544 "superblock": true, 00:20:18.544 "num_base_bdevs": 3, 00:20:18.544 "num_base_bdevs_discovered": 1, 00:20:18.544 "num_base_bdevs_operational": 3, 00:20:18.544 "base_bdevs_list": [ 00:20:18.544 { 00:20:18.544 "name": "BaseBdev1", 00:20:18.544 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:18.544 "is_configured": true, 00:20:18.544 "data_offset": 2048, 00:20:18.544 "data_size": 63488 00:20:18.544 }, 00:20:18.544 { 00:20:18.544 "name": null, 00:20:18.544 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:18.544 "is_configured": false, 00:20:18.544 "data_offset": 2048, 00:20:18.544 "data_size": 63488 00:20:18.544 }, 00:20:18.544 { 00:20:18.544 "name": null, 00:20:18.544 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:18.544 "is_configured": false, 00:20:18.544 "data_offset": 2048, 00:20:18.544 "data_size": 63488 00:20:18.544 } 00:20:18.544 ] 00:20:18.544 }' 00:20:18.544 23:06:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.544 23:06:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.111 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.111 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:19.370 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:19.370 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:19.628 [2024-07-13 23:06:08.889849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.628 23:06:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.886 23:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.886 "name": "Existed_Raid", 00:20:19.886 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:19.886 "strip_size_kb": 0, 00:20:19.886 "state": "configuring", 00:20:19.886 "raid_level": "raid1", 00:20:19.886 "superblock": true, 00:20:19.886 "num_base_bdevs": 3, 00:20:19.886 "num_base_bdevs_discovered": 2, 00:20:19.886 "num_base_bdevs_operational": 3, 00:20:19.886 "base_bdevs_list": [ 00:20:19.886 { 00:20:19.886 "name": "BaseBdev1", 00:20:19.886 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:19.886 "is_configured": true, 00:20:19.886 "data_offset": 2048, 00:20:19.886 "data_size": 63488 00:20:19.886 }, 00:20:19.886 { 00:20:19.886 "name": null, 00:20:19.886 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:19.886 "is_configured": false, 00:20:19.886 "data_offset": 2048, 00:20:19.886 "data_size": 63488 00:20:19.886 }, 00:20:19.886 { 00:20:19.886 "name": "BaseBdev3", 00:20:19.886 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:19.886 "is_configured": true, 00:20:19.887 "data_offset": 2048, 00:20:19.887 "data_size": 63488 00:20:19.887 } 00:20:19.887 ] 00:20:19.887 }' 00:20:19.887 23:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.887 23:06:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.453 23:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:20.453 23:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.711 23:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:20.711 23:06:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:20.969 [2024-07-13 23:06:10.170124] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.969 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.228 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.228 "name": "Existed_Raid", 00:20:21.228 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:21.228 "strip_size_kb": 0, 00:20:21.228 "state": "configuring", 00:20:21.228 "raid_level": "raid1", 00:20:21.228 "superblock": true, 00:20:21.228 "num_base_bdevs": 3, 00:20:21.228 "num_base_bdevs_discovered": 1, 00:20:21.228 "num_base_bdevs_operational": 3, 00:20:21.228 "base_bdevs_list": [ 00:20:21.228 { 00:20:21.228 "name": null, 00:20:21.228 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:21.228 "is_configured": false, 00:20:21.228 "data_offset": 2048, 00:20:21.228 "data_size": 63488 00:20:21.228 }, 00:20:21.228 { 00:20:21.228 "name": null, 00:20:21.228 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:21.228 "is_configured": false, 00:20:21.228 "data_offset": 2048, 00:20:21.228 "data_size": 63488 00:20:21.228 }, 00:20:21.228 { 00:20:21.228 "name": "BaseBdev3", 00:20:21.228 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:21.228 "is_configured": true, 00:20:21.228 "data_offset": 2048, 00:20:21.228 "data_size": 63488 00:20:21.228 } 00:20:21.228 ] 00:20:21.228 }' 00:20:21.228 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.228 23:06:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.795 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.795 23:06:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:22.053 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:22.053 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:22.053 [2024-07-13 23:06:11.456531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:22.312 23:06:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.312 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.578 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.578 "name": "Existed_Raid", 00:20:22.578 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:22.578 "strip_size_kb": 0, 00:20:22.578 "state": "configuring", 00:20:22.578 "raid_level": "raid1", 00:20:22.578 "superblock": true, 00:20:22.578 "num_base_bdevs": 3, 00:20:22.578 "num_base_bdevs_discovered": 2, 00:20:22.578 "num_base_bdevs_operational": 3, 00:20:22.578 "base_bdevs_list": [ 00:20:22.578 { 00:20:22.578 "name": null, 00:20:22.578 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:22.578 "is_configured": false, 00:20:22.578 "data_offset": 2048, 00:20:22.578 "data_size": 63488 00:20:22.578 }, 00:20:22.578 { 00:20:22.578 "name": "BaseBdev2", 00:20:22.578 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:22.578 "is_configured": true, 00:20:22.578 "data_offset": 2048, 00:20:22.578 "data_size": 63488 00:20:22.578 }, 00:20:22.578 { 00:20:22.578 "name": "BaseBdev3", 00:20:22.578 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:22.578 "is_configured": true, 00:20:22.578 "data_offset": 2048, 00:20:22.578 "data_size": 63488 00:20:22.578 } 00:20:22.578 ] 00:20:22.578 }' 00:20:22.578 23:06:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.578 23:06:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.157 23:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.157 23:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:23.416 23:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:23.416 23:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.416 23:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:23.675 23:06:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c65d750b-ce81-40f5-9475-80d8839c30f0 00:20:23.935 [2024-07-13 23:06:13.157205] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:23.935 [2024-07-13 23:06:13.157699] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:23.935 [2024-07-13 
23:06:13.157839] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:23.935 [2024-07-13 23:06:13.157959] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:23.935 NewBaseBdev 00:20:23.935 [2024-07-13 23:06:13.158531] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:23.935 [2024-07-13 23:06:13.158547] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:20:23.935 [2024-07-13 23:06:13.158697] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:23.935 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.193 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:24.451 [ 00:20:24.451 { 00:20:24.451 "name": "NewBaseBdev", 00:20:24.451 "aliases": [ 00:20:24.451 "c65d750b-ce81-40f5-9475-80d8839c30f0" 00:20:24.451 ], 00:20:24.451 "product_name": "Malloc disk", 00:20:24.451 "block_size": 512, 00:20:24.451 "num_blocks": 65536, 00:20:24.451 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:24.451 "assigned_rate_limits": { 00:20:24.451 "rw_ios_per_sec": 0, 00:20:24.451 "rw_mbytes_per_sec": 0, 00:20:24.451 "r_mbytes_per_sec": 0, 00:20:24.451 "w_mbytes_per_sec": 0 00:20:24.451 }, 00:20:24.451 "claimed": true, 00:20:24.451 "claim_type": "exclusive_write", 00:20:24.451 "zoned": false, 00:20:24.451 "supported_io_types": { 00:20:24.451 "read": true, 00:20:24.451 "write": true, 00:20:24.451 "unmap": true, 00:20:24.451 "flush": true, 00:20:24.451 "reset": true, 00:20:24.451 "nvme_admin": false, 00:20:24.451 "nvme_io": false, 00:20:24.451 "nvme_io_md": false, 00:20:24.451 "write_zeroes": true, 00:20:24.451 "zcopy": true, 00:20:24.451 "get_zone_info": false, 00:20:24.451 "zone_management": false, 00:20:24.451 "zone_append": false, 00:20:24.451 "compare": false, 00:20:24.451 "compare_and_write": false, 00:20:24.451 "abort": true, 00:20:24.451 "seek_hole": false, 00:20:24.451 "seek_data": false, 00:20:24.451 "copy": true, 00:20:24.451 "nvme_iov_md": false 00:20:24.451 }, 00:20:24.451 "memory_domains": [ 00:20:24.451 { 00:20:24.451 "dma_device_id": "system", 00:20:24.451 "dma_device_type": 1 00:20:24.451 }, 00:20:24.451 { 00:20:24.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.451 "dma_device_type": 2 00:20:24.451 } 00:20:24.451 ], 00:20:24.451 "driver_specific": {} 00:20:24.451 } 00:20:24.451 ] 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:24.451 23:06:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.451 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.708 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.708 "name": "Existed_Raid", 00:20:24.708 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:24.708 "strip_size_kb": 0, 00:20:24.708 "state": "online", 00:20:24.708 "raid_level": "raid1", 00:20:24.708 "superblock": true, 00:20:24.708 "num_base_bdevs": 3, 00:20:24.708 "num_base_bdevs_discovered": 3, 00:20:24.708 "num_base_bdevs_operational": 3, 00:20:24.708 "base_bdevs_list": [ 00:20:24.708 { 00:20:24.708 "name": "NewBaseBdev", 00:20:24.708 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:24.708 "is_configured": true, 00:20:24.708 "data_offset": 2048, 00:20:24.708 "data_size": 63488 00:20:24.708 }, 00:20:24.708 { 00:20:24.708 "name": "BaseBdev2", 00:20:24.708 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:24.708 "is_configured": true, 00:20:24.708 "data_offset": 2048, 00:20:24.708 "data_size": 63488 00:20:24.708 }, 00:20:24.708 { 00:20:24.708 "name": "BaseBdev3", 00:20:24.708 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:24.708 "is_configured": true, 00:20:24.708 "data_offset": 2048, 00:20:24.708 "data_size": 63488 00:20:24.708 } 00:20:24.708 ] 00:20:24.708 }' 00:20:24.708 23:06:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.708 23:06:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:25.274 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:25.533 [2024-07-13 23:06:14.789922] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.533 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:25.533 "name": "Existed_Raid", 00:20:25.533 "aliases": [ 00:20:25.533 "27f41024-dc1e-4252-8f69-4fb378780b61" 00:20:25.533 ], 00:20:25.533 "product_name": "Raid Volume", 00:20:25.533 "block_size": 512, 00:20:25.533 "num_blocks": 63488, 00:20:25.533 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:25.533 "assigned_rate_limits": { 00:20:25.533 "rw_ios_per_sec": 0, 00:20:25.533 "rw_mbytes_per_sec": 0, 00:20:25.533 "r_mbytes_per_sec": 0, 00:20:25.533 "w_mbytes_per_sec": 0 00:20:25.533 }, 00:20:25.533 "claimed": false, 00:20:25.533 "zoned": false, 00:20:25.533 "supported_io_types": { 00:20:25.533 "read": true, 00:20:25.533 "write": true, 00:20:25.533 "unmap": false, 00:20:25.533 "flush": false, 00:20:25.533 "reset": true, 00:20:25.533 "nvme_admin": false, 00:20:25.533 "nvme_io": false, 00:20:25.533 "nvme_io_md": false, 00:20:25.533 "write_zeroes": true, 00:20:25.533 "zcopy": false, 00:20:25.533 "get_zone_info": false, 00:20:25.533 "zone_management": false, 00:20:25.533 "zone_append": false, 00:20:25.533 "compare": false, 00:20:25.533 "compare_and_write": false, 00:20:25.533 "abort": false, 00:20:25.533 "seek_hole": false, 00:20:25.533 "seek_data": false, 00:20:25.533 "copy": false, 00:20:25.533 "nvme_iov_md": false 00:20:25.533 }, 00:20:25.533 "memory_domains": [ 00:20:25.533 { 00:20:25.533 "dma_device_id": "system", 00:20:25.533 "dma_device_type": 1 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.533 "dma_device_type": 2 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "dma_device_id": "system", 00:20:25.533 "dma_device_type": 1 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.533 "dma_device_type": 2 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "dma_device_id": "system", 00:20:25.533 "dma_device_type": 1 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.533 "dma_device_type": 2 00:20:25.533 } 00:20:25.533 ], 00:20:25.533 "driver_specific": { 00:20:25.533 "raid": { 00:20:25.533 "uuid": "27f41024-dc1e-4252-8f69-4fb378780b61", 00:20:25.533 "strip_size_kb": 0, 00:20:25.533 "state": "online", 00:20:25.533 "raid_level": "raid1", 00:20:25.533 "superblock": true, 00:20:25.533 "num_base_bdevs": 3, 00:20:25.533 "num_base_bdevs_discovered": 3, 00:20:25.533 "num_base_bdevs_operational": 3, 00:20:25.533 "base_bdevs_list": [ 00:20:25.533 { 00:20:25.533 "name": "NewBaseBdev", 00:20:25.533 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:25.533 "is_configured": true, 00:20:25.533 "data_offset": 2048, 00:20:25.533 "data_size": 63488 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "name": "BaseBdev2", 00:20:25.533 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:25.533 "is_configured": true, 00:20:25.533 "data_offset": 2048, 00:20:25.533 "data_size": 63488 00:20:25.533 }, 00:20:25.533 { 00:20:25.533 "name": "BaseBdev3", 00:20:25.533 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:25.533 "is_configured": true, 
00:20:25.533 "data_offset": 2048, 00:20:25.533 "data_size": 63488 00:20:25.533 } 00:20:25.533 ] 00:20:25.533 } 00:20:25.533 } 00:20:25.533 }' 00:20:25.533 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:25.533 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:25.533 BaseBdev2 00:20:25.533 BaseBdev3' 00:20:25.533 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:25.533 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:25.533 23:06:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:25.792 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:25.792 "name": "NewBaseBdev", 00:20:25.792 "aliases": [ 00:20:25.792 "c65d750b-ce81-40f5-9475-80d8839c30f0" 00:20:25.792 ], 00:20:25.792 "product_name": "Malloc disk", 00:20:25.792 "block_size": 512, 00:20:25.792 "num_blocks": 65536, 00:20:25.792 "uuid": "c65d750b-ce81-40f5-9475-80d8839c30f0", 00:20:25.792 "assigned_rate_limits": { 00:20:25.792 "rw_ios_per_sec": 0, 00:20:25.792 "rw_mbytes_per_sec": 0, 00:20:25.792 "r_mbytes_per_sec": 0, 00:20:25.792 "w_mbytes_per_sec": 0 00:20:25.792 }, 00:20:25.792 "claimed": true, 00:20:25.792 "claim_type": "exclusive_write", 00:20:25.792 "zoned": false, 00:20:25.792 "supported_io_types": { 00:20:25.792 "read": true, 00:20:25.792 "write": true, 00:20:25.792 "unmap": true, 00:20:25.792 "flush": true, 00:20:25.792 "reset": true, 00:20:25.792 "nvme_admin": false, 00:20:25.792 "nvme_io": false, 00:20:25.792 "nvme_io_md": false, 00:20:25.792 "write_zeroes": true, 00:20:25.792 "zcopy": true, 00:20:25.792 "get_zone_info": false, 00:20:25.792 "zone_management": false, 00:20:25.792 "zone_append": false, 00:20:25.792 "compare": false, 00:20:25.792 "compare_and_write": false, 00:20:25.792 "abort": true, 00:20:25.792 "seek_hole": false, 00:20:25.792 "seek_data": false, 00:20:25.792 "copy": true, 00:20:25.792 "nvme_iov_md": false 00:20:25.792 }, 00:20:25.792 "memory_domains": [ 00:20:25.792 { 00:20:25.792 "dma_device_id": "system", 00:20:25.792 "dma_device_type": 1 00:20:25.792 }, 00:20:25.792 { 00:20:25.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.792 "dma_device_type": 2 00:20:25.792 } 00:20:25.792 ], 00:20:25.792 "driver_specific": {} 00:20:25.792 }' 00:20:25.792 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.792 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.792 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:25.792 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.051 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.309 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:26.309 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.309 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:26.309 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.568 "name": "BaseBdev2", 00:20:26.568 "aliases": [ 00:20:26.568 "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b" 00:20:26.568 ], 00:20:26.568 "product_name": "Malloc disk", 00:20:26.568 "block_size": 512, 00:20:26.568 "num_blocks": 65536, 00:20:26.568 "uuid": "2f30e6e9-9f88-43ef-8fce-82a3cf3fdd6b", 00:20:26.568 "assigned_rate_limits": { 00:20:26.568 "rw_ios_per_sec": 0, 00:20:26.568 "rw_mbytes_per_sec": 0, 00:20:26.568 "r_mbytes_per_sec": 0, 00:20:26.568 "w_mbytes_per_sec": 0 00:20:26.568 }, 00:20:26.568 "claimed": true, 00:20:26.568 "claim_type": "exclusive_write", 00:20:26.568 "zoned": false, 00:20:26.568 "supported_io_types": { 00:20:26.568 "read": true, 00:20:26.568 "write": true, 00:20:26.568 "unmap": true, 00:20:26.568 "flush": true, 00:20:26.568 "reset": true, 00:20:26.568 "nvme_admin": false, 00:20:26.568 "nvme_io": false, 00:20:26.568 "nvme_io_md": false, 00:20:26.568 "write_zeroes": true, 00:20:26.568 "zcopy": true, 00:20:26.568 "get_zone_info": false, 00:20:26.568 "zone_management": false, 00:20:26.568 "zone_append": false, 00:20:26.568 "compare": false, 00:20:26.568 "compare_and_write": false, 00:20:26.568 "abort": true, 00:20:26.568 "seek_hole": false, 00:20:26.568 "seek_data": false, 00:20:26.568 "copy": true, 00:20:26.568 "nvme_iov_md": false 00:20:26.568 }, 00:20:26.568 "memory_domains": [ 00:20:26.568 { 00:20:26.568 "dma_device_id": "system", 00:20:26.568 "dma_device_type": 1 00:20:26.568 }, 00:20:26.568 { 00:20:26.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.568 "dma_device_type": 2 00:20:26.568 } 00:20:26.568 ], 00:20:26.568 "driver_specific": {} 00:20:26.568 }' 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.568 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.827 23:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.827 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:26.827 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.827 23:06:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.827 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:26.827 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.827 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:26.827 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.086 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.086 "name": "BaseBdev3", 00:20:27.086 "aliases": [ 00:20:27.086 "95695bd1-8de9-4f59-a6f8-b41159ffde1a" 00:20:27.086 ], 00:20:27.086 "product_name": "Malloc disk", 00:20:27.086 "block_size": 512, 00:20:27.086 "num_blocks": 65536, 00:20:27.086 "uuid": "95695bd1-8de9-4f59-a6f8-b41159ffde1a", 00:20:27.086 "assigned_rate_limits": { 00:20:27.086 "rw_ios_per_sec": 0, 00:20:27.086 "rw_mbytes_per_sec": 0, 00:20:27.086 "r_mbytes_per_sec": 0, 00:20:27.086 "w_mbytes_per_sec": 0 00:20:27.086 }, 00:20:27.086 "claimed": true, 00:20:27.086 "claim_type": "exclusive_write", 00:20:27.086 "zoned": false, 00:20:27.086 "supported_io_types": { 00:20:27.086 "read": true, 00:20:27.086 "write": true, 00:20:27.086 "unmap": true, 00:20:27.086 "flush": true, 00:20:27.086 "reset": true, 00:20:27.086 "nvme_admin": false, 00:20:27.086 "nvme_io": false, 00:20:27.086 "nvme_io_md": false, 00:20:27.086 "write_zeroes": true, 00:20:27.086 "zcopy": true, 00:20:27.086 "get_zone_info": false, 00:20:27.086 "zone_management": false, 00:20:27.086 "zone_append": false, 00:20:27.086 "compare": false, 00:20:27.086 "compare_and_write": false, 00:20:27.086 "abort": true, 00:20:27.086 "seek_hole": false, 00:20:27.086 "seek_data": false, 00:20:27.086 "copy": true, 00:20:27.086 "nvme_iov_md": false 00:20:27.086 }, 00:20:27.086 "memory_domains": [ 00:20:27.086 { 00:20:27.086 "dma_device_id": "system", 00:20:27.086 "dma_device_type": 1 00:20:27.086 }, 00:20:27.086 { 00:20:27.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.086 "dma_device_type": 2 00:20:27.086 } 00:20:27.086 ], 00:20:27.086 "driver_specific": {} 00:20:27.086 }' 00:20:27.086 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.086 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.086 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.086 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.343 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.601 23:06:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.601 23:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:27.601 [2024-07-13 23:06:16.998078] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:27.601 [2024-07-13 23:06:16.998311] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:27.601 [2024-07-13 23:06:16.998500] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.601 [2024-07-13 23:06:16.998938] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.601 [2024-07-13 23:06:16.999048] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 141936 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 141936 ']' 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 141936 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141936 00:20:27.860 killing process with pid 141936 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141936' 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 141936 00:20:27.860 [2024-07-13 23:06:17.051103] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.860 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 141936 00:20:27.860 [2024-07-13 23:06:17.086837] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.119 ************************************ 00:20:28.119 END TEST raid_state_function_test_sb 00:20:28.119 ************************************ 00:20:28.119 23:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:28.119 00:20:28.119 real 0m29.303s 00:20:28.119 user 0m55.509s 00:20:28.119 sys 0m3.623s 00:20:28.119 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:28.119 23:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.119 23:06:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:28.119 23:06:17 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:28.119 23:06:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:28.119 23:06:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.119 23:06:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.119 ************************************ 00:20:28.119 START TEST 
raid_superblock_test 00:20:28.119 ************************************ 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=142914 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 142914 /var/tmp/spdk-raid.sock 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 142914 ']' 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:28.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.119 23:06:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:28.119 [2024-07-13 23:06:17.518185] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:28.119 [2024-07-13 23:06:17.518860] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142914 ] 00:20:28.379 [2024-07-13 23:06:17.667850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.379 [2024-07-13 23:06:17.758830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.637 [2024-07-13 23:06:17.817693] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:29.205 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:29.464 malloc1 00:20:29.464 23:06:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:29.722 [2024-07-13 23:06:19.083824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:29.722 [2024-07-13 23:06:19.084183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.722 [2024-07-13 23:06:19.084345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:29.722 [2024-07-13 23:06:19.084507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.722 [2024-07-13 23:06:19.087161] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.722 [2024-07-13 23:06:19.087407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:29.722 pt1 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:29.722 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:29.981 malloc2 00:20:29.981 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:30.240 [2024-07-13 23:06:19.554574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:30.240 [2024-07-13 23:06:19.554915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.240 [2024-07-13 23:06:19.554996] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:30.240 [2024-07-13 23:06:19.555317] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.240 [2024-07-13 23:06:19.557854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.240 [2024-07-13 23:06:19.558051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:30.240 pt2 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:30.240 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:30.498 malloc3 00:20:30.498 23:06:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:30.757 [2024-07-13 23:06:20.044239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:30.757 [2024-07-13 23:06:20.044575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.757 [2024-07-13 23:06:20.044746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:30.758 [2024-07-13 23:06:20.044926] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.758 [2024-07-13 23:06:20.047436] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.758 [2024-07-13 23:06:20.047669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:30.758 pt3 00:20:30.758 
23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:30.758 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:30.758 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:31.016 [2024-07-13 23:06:20.268396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:31.016 [2024-07-13 23:06:20.270603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.016 [2024-07-13 23:06:20.270845] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:31.016 [2024-07-13 23:06:20.271153] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:31.016 [2024-07-13 23:06:20.271297] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:31.016 [2024-07-13 23:06:20.271508] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:31.016 [2024-07-13 23:06:20.272096] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:31.016 [2024-07-13 23:06:20.272237] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:20:31.016 [2024-07-13 23:06:20.272576] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.016 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.017 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.017 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.017 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.017 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.275 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.275 "name": "raid_bdev1", 00:20:31.275 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:31.275 "strip_size_kb": 0, 00:20:31.275 "state": "online", 00:20:31.275 "raid_level": "raid1", 00:20:31.275 "superblock": true, 00:20:31.275 "num_base_bdevs": 3, 00:20:31.275 "num_base_bdevs_discovered": 3, 00:20:31.275 "num_base_bdevs_operational": 3, 00:20:31.275 "base_bdevs_list": [ 00:20:31.275 { 00:20:31.275 "name": "pt1", 00:20:31.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.275 
"is_configured": true, 00:20:31.275 "data_offset": 2048, 00:20:31.276 "data_size": 63488 00:20:31.276 }, 00:20:31.276 { 00:20:31.276 "name": "pt2", 00:20:31.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.276 "is_configured": true, 00:20:31.276 "data_offset": 2048, 00:20:31.276 "data_size": 63488 00:20:31.276 }, 00:20:31.276 { 00:20:31.276 "name": "pt3", 00:20:31.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:31.276 "is_configured": true, 00:20:31.276 "data_offset": 2048, 00:20:31.276 "data_size": 63488 00:20:31.276 } 00:20:31.276 ] 00:20:31.276 }' 00:20:31.276 23:06:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.276 23:06:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:31.842 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:32.100 [2024-07-13 23:06:21.373090] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:32.100 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:32.100 "name": "raid_bdev1", 00:20:32.100 "aliases": [ 00:20:32.100 "1f3c73ae-8196-458d-bb06-bbd503c594b8" 00:20:32.100 ], 00:20:32.100 "product_name": "Raid Volume", 00:20:32.100 "block_size": 512, 00:20:32.101 "num_blocks": 63488, 00:20:32.101 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:32.101 "assigned_rate_limits": { 00:20:32.101 "rw_ios_per_sec": 0, 00:20:32.101 "rw_mbytes_per_sec": 0, 00:20:32.101 "r_mbytes_per_sec": 0, 00:20:32.101 "w_mbytes_per_sec": 0 00:20:32.101 }, 00:20:32.101 "claimed": false, 00:20:32.101 "zoned": false, 00:20:32.101 "supported_io_types": { 00:20:32.101 "read": true, 00:20:32.101 "write": true, 00:20:32.101 "unmap": false, 00:20:32.101 "flush": false, 00:20:32.101 "reset": true, 00:20:32.101 "nvme_admin": false, 00:20:32.101 "nvme_io": false, 00:20:32.101 "nvme_io_md": false, 00:20:32.101 "write_zeroes": true, 00:20:32.101 "zcopy": false, 00:20:32.101 "get_zone_info": false, 00:20:32.101 "zone_management": false, 00:20:32.101 "zone_append": false, 00:20:32.101 "compare": false, 00:20:32.101 "compare_and_write": false, 00:20:32.101 "abort": false, 00:20:32.101 "seek_hole": false, 00:20:32.101 "seek_data": false, 00:20:32.101 "copy": false, 00:20:32.101 "nvme_iov_md": false 00:20:32.101 }, 00:20:32.101 "memory_domains": [ 00:20:32.101 { 00:20:32.101 "dma_device_id": "system", 00:20:32.101 "dma_device_type": 1 00:20:32.101 }, 00:20:32.101 { 00:20:32.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.101 "dma_device_type": 2 00:20:32.101 }, 00:20:32.101 { 00:20:32.101 "dma_device_id": "system", 00:20:32.101 "dma_device_type": 1 00:20:32.101 }, 00:20:32.101 { 
00:20:32.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.101 "dma_device_type": 2 00:20:32.101 }, 00:20:32.101 { 00:20:32.101 "dma_device_id": "system", 00:20:32.101 "dma_device_type": 1 00:20:32.101 }, 00:20:32.101 { 00:20:32.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.101 "dma_device_type": 2 00:20:32.101 } 00:20:32.101 ], 00:20:32.101 "driver_specific": { 00:20:32.101 "raid": { 00:20:32.101 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:32.101 "strip_size_kb": 0, 00:20:32.101 "state": "online", 00:20:32.101 "raid_level": "raid1", 00:20:32.101 "superblock": true, 00:20:32.101 "num_base_bdevs": 3, 00:20:32.101 "num_base_bdevs_discovered": 3, 00:20:32.101 "num_base_bdevs_operational": 3, 00:20:32.101 "base_bdevs_list": [ 00:20:32.101 { 00:20:32.101 "name": "pt1", 00:20:32.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:32.101 "is_configured": true, 00:20:32.101 "data_offset": 2048, 00:20:32.101 "data_size": 63488 00:20:32.101 }, 00:20:32.101 { 00:20:32.101 "name": "pt2", 00:20:32.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.101 "is_configured": true, 00:20:32.101 "data_offset": 2048, 00:20:32.101 "data_size": 63488 00:20:32.101 }, 00:20:32.101 { 00:20:32.101 "name": "pt3", 00:20:32.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:32.101 "is_configured": true, 00:20:32.101 "data_offset": 2048, 00:20:32.101 "data_size": 63488 00:20:32.101 } 00:20:32.101 ] 00:20:32.101 } 00:20:32.101 } 00:20:32.101 }' 00:20:32.101 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:32.101 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:32.101 pt2 00:20:32.101 pt3' 00:20:32.101 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.101 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:32.101 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.360 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.360 "name": "pt1", 00:20:32.360 "aliases": [ 00:20:32.360 "00000000-0000-0000-0000-000000000001" 00:20:32.360 ], 00:20:32.360 "product_name": "passthru", 00:20:32.360 "block_size": 512, 00:20:32.360 "num_blocks": 65536, 00:20:32.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:32.360 "assigned_rate_limits": { 00:20:32.360 "rw_ios_per_sec": 0, 00:20:32.360 "rw_mbytes_per_sec": 0, 00:20:32.360 "r_mbytes_per_sec": 0, 00:20:32.360 "w_mbytes_per_sec": 0 00:20:32.360 }, 00:20:32.360 "claimed": true, 00:20:32.360 "claim_type": "exclusive_write", 00:20:32.360 "zoned": false, 00:20:32.360 "supported_io_types": { 00:20:32.360 "read": true, 00:20:32.360 "write": true, 00:20:32.360 "unmap": true, 00:20:32.360 "flush": true, 00:20:32.360 "reset": true, 00:20:32.360 "nvme_admin": false, 00:20:32.360 "nvme_io": false, 00:20:32.360 "nvme_io_md": false, 00:20:32.360 "write_zeroes": true, 00:20:32.360 "zcopy": true, 00:20:32.360 "get_zone_info": false, 00:20:32.360 "zone_management": false, 00:20:32.360 "zone_append": false, 00:20:32.360 "compare": false, 00:20:32.360 "compare_and_write": false, 00:20:32.360 "abort": true, 00:20:32.360 "seek_hole": false, 00:20:32.360 "seek_data": false, 00:20:32.360 "copy": true, 00:20:32.360 "nvme_iov_md": false 00:20:32.360 }, 
00:20:32.360 "memory_domains": [ 00:20:32.360 { 00:20:32.360 "dma_device_id": "system", 00:20:32.360 "dma_device_type": 1 00:20:32.360 }, 00:20:32.360 { 00:20:32.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.360 "dma_device_type": 2 00:20:32.360 } 00:20:32.360 ], 00:20:32.360 "driver_specific": { 00:20:32.360 "passthru": { 00:20:32.360 "name": "pt1", 00:20:32.360 "base_bdev_name": "malloc1" 00:20:32.360 } 00:20:32.360 } 00:20:32.360 }' 00:20:32.360 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.360 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.360 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.360 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.618 23:06:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.877 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.877 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.877 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:32.877 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:33.135 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:33.135 "name": "pt2", 00:20:33.135 "aliases": [ 00:20:33.135 "00000000-0000-0000-0000-000000000002" 00:20:33.135 ], 00:20:33.135 "product_name": "passthru", 00:20:33.135 "block_size": 512, 00:20:33.135 "num_blocks": 65536, 00:20:33.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:33.135 "assigned_rate_limits": { 00:20:33.135 "rw_ios_per_sec": 0, 00:20:33.135 "rw_mbytes_per_sec": 0, 00:20:33.135 "r_mbytes_per_sec": 0, 00:20:33.135 "w_mbytes_per_sec": 0 00:20:33.135 }, 00:20:33.135 "claimed": true, 00:20:33.135 "claim_type": "exclusive_write", 00:20:33.135 "zoned": false, 00:20:33.135 "supported_io_types": { 00:20:33.135 "read": true, 00:20:33.135 "write": true, 00:20:33.135 "unmap": true, 00:20:33.135 "flush": true, 00:20:33.135 "reset": true, 00:20:33.135 "nvme_admin": false, 00:20:33.135 "nvme_io": false, 00:20:33.135 "nvme_io_md": false, 00:20:33.135 "write_zeroes": true, 00:20:33.135 "zcopy": true, 00:20:33.135 "get_zone_info": false, 00:20:33.135 "zone_management": false, 00:20:33.135 "zone_append": false, 00:20:33.135 "compare": false, 00:20:33.135 "compare_and_write": false, 00:20:33.135 "abort": true, 00:20:33.135 "seek_hole": false, 00:20:33.135 "seek_data": false, 00:20:33.135 "copy": true, 00:20:33.136 "nvme_iov_md": false 00:20:33.136 }, 00:20:33.136 "memory_domains": [ 00:20:33.136 { 00:20:33.136 "dma_device_id": "system", 00:20:33.136 "dma_device_type": 1 00:20:33.136 }, 00:20:33.136 { 
00:20:33.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.136 "dma_device_type": 2 00:20:33.136 } 00:20:33.136 ], 00:20:33.136 "driver_specific": { 00:20:33.136 "passthru": { 00:20:33.136 "name": "pt2", 00:20:33.136 "base_bdev_name": "malloc2" 00:20:33.136 } 00:20:33.136 } 00:20:33.136 }' 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.136 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:33.394 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:33.652 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:33.652 "name": "pt3", 00:20:33.652 "aliases": [ 00:20:33.652 "00000000-0000-0000-0000-000000000003" 00:20:33.652 ], 00:20:33.652 "product_name": "passthru", 00:20:33.652 "block_size": 512, 00:20:33.652 "num_blocks": 65536, 00:20:33.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:33.652 "assigned_rate_limits": { 00:20:33.652 "rw_ios_per_sec": 0, 00:20:33.652 "rw_mbytes_per_sec": 0, 00:20:33.652 "r_mbytes_per_sec": 0, 00:20:33.652 "w_mbytes_per_sec": 0 00:20:33.652 }, 00:20:33.652 "claimed": true, 00:20:33.652 "claim_type": "exclusive_write", 00:20:33.652 "zoned": false, 00:20:33.652 "supported_io_types": { 00:20:33.652 "read": true, 00:20:33.652 "write": true, 00:20:33.652 "unmap": true, 00:20:33.652 "flush": true, 00:20:33.652 "reset": true, 00:20:33.652 "nvme_admin": false, 00:20:33.652 "nvme_io": false, 00:20:33.652 "nvme_io_md": false, 00:20:33.652 "write_zeroes": true, 00:20:33.652 "zcopy": true, 00:20:33.652 "get_zone_info": false, 00:20:33.652 "zone_management": false, 00:20:33.652 "zone_append": false, 00:20:33.652 "compare": false, 00:20:33.652 "compare_and_write": false, 00:20:33.652 "abort": true, 00:20:33.652 "seek_hole": false, 00:20:33.652 "seek_data": false, 00:20:33.652 "copy": true, 00:20:33.652 "nvme_iov_md": false 00:20:33.652 }, 00:20:33.652 "memory_domains": [ 00:20:33.652 { 00:20:33.652 "dma_device_id": "system", 00:20:33.652 "dma_device_type": 1 00:20:33.652 }, 00:20:33.652 { 00:20:33.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.652 "dma_device_type": 2 00:20:33.652 } 00:20:33.652 ], 00:20:33.652 "driver_specific": { 
00:20:33.652 "passthru": { 00:20:33.652 "name": "pt3", 00:20:33.652 "base_bdev_name": "malloc3" 00:20:33.652 } 00:20:33.652 } 00:20:33.652 }' 00:20:33.652 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.652 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:33.652 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:33.652 23:06:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.652 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:33.910 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:34.168 [2024-07-13 23:06:23.505571] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.168 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1f3c73ae-8196-458d-bb06-bbd503c594b8 00:20:34.168 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 1f3c73ae-8196-458d-bb06-bbd503c594b8 ']' 00:20:34.168 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:34.425 [2024-07-13 23:06:23.765444] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.425 [2024-07-13 23:06:23.765691] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.425 [2024-07-13 23:06:23.766008] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.425 [2024-07-13 23:06:23.766253] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.425 [2024-07-13 23:06:23.766375] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:20:34.425 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.425 23:06:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:34.693 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:34.693 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:34.693 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.693 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:34.962 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.962 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:35.220 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:35.220 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:35.478 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:35.478 23:06:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:35.736 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:35.994 [2024-07-13 23:06:25.325750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:35.994 [2024-07-13 23:06:25.328012] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:35.994 [2024-07-13 23:06:25.328238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:35.994 [2024-07-13 23:06:25.328347] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:35.994 [2024-07-13 23:06:25.328660] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:35.994 [2024-07-13 23:06:25.328824] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:35.994 [2024-07-13 23:06:25.329033] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.994 [2024-07-13 23:06:25.329158] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:20:35.994 request: 00:20:35.994 { 00:20:35.994 "name": "raid_bdev1", 00:20:35.994 "raid_level": "raid1", 00:20:35.994 "base_bdevs": [ 00:20:35.994 "malloc1", 00:20:35.994 "malloc2", 00:20:35.994 "malloc3" 00:20:35.994 ], 00:20:35.994 "superblock": false, 00:20:35.994 "method": "bdev_raid_create", 00:20:35.994 "req_id": 1 00:20:35.994 } 00:20:35.994 Got JSON-RPC error response 00:20:35.994 response: 00:20:35.994 { 00:20:35.994 "code": -17, 00:20:35.994 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:35.994 } 00:20:35.994 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:35.994 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.994 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.994 23:06:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.994 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.994 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:36.252 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:36.252 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:36.252 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:36.510 [2024-07-13 23:06:25.848804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:36.510 [2024-07-13 23:06:25.849104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.510 [2024-07-13 23:06:25.849261] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:36.510 [2024-07-13 23:06:25.849402] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.510 [2024-07-13 23:06:25.851975] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.510 [2024-07-13 23:06:25.852151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:36.510 [2024-07-13 23:06:25.852382] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:36.510 [2024-07-13 23:06:25.852544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:36.510 pt1 00:20:36.510 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.511 23:06:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.770 23:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.770 "name": "raid_bdev1", 00:20:36.770 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:36.770 "strip_size_kb": 0, 00:20:36.770 "state": "configuring", 00:20:36.770 "raid_level": "raid1", 00:20:36.770 "superblock": true, 00:20:36.770 "num_base_bdevs": 3, 00:20:36.770 "num_base_bdevs_discovered": 1, 00:20:36.770 "num_base_bdevs_operational": 3, 00:20:36.770 "base_bdevs_list": [ 00:20:36.770 { 00:20:36.770 "name": "pt1", 00:20:36.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.770 "is_configured": true, 00:20:36.770 "data_offset": 2048, 00:20:36.770 "data_size": 63488 00:20:36.770 }, 00:20:36.770 { 00:20:36.770 "name": null, 00:20:36.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.770 "is_configured": false, 00:20:36.770 "data_offset": 2048, 00:20:36.770 "data_size": 63488 00:20:36.770 }, 00:20:36.770 { 00:20:36.770 "name": null, 00:20:36.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.770 "is_configured": false, 00:20:36.770 "data_offset": 2048, 00:20:36.770 "data_size": 63488 00:20:36.770 } 00:20:36.770 ] 00:20:36.770 }' 00:20:36.770 23:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.770 23:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.705 23:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:37.705 23:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.705 [2024-07-13 23:06:26.969190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.705 [2024-07-13 23:06:26.969498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.705 [2024-07-13 23:06:26.969644] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:37.705 [2024-07-13 23:06:26.969766] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.705 [2024-07-13 23:06:26.970322] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.705 [2024-07-13 23:06:26.970522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.705 [2024-07-13 
23:06:26.970792] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.705 [2024-07-13 23:06:26.970925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.705 pt2 00:20:37.705 23:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:37.964 [2024-07-13 23:06:27.237268] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.964 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.222 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.222 "name": "raid_bdev1", 00:20:38.222 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:38.222 "strip_size_kb": 0, 00:20:38.222 "state": "configuring", 00:20:38.222 "raid_level": "raid1", 00:20:38.222 "superblock": true, 00:20:38.222 "num_base_bdevs": 3, 00:20:38.222 "num_base_bdevs_discovered": 1, 00:20:38.222 "num_base_bdevs_operational": 3, 00:20:38.222 "base_bdevs_list": [ 00:20:38.222 { 00:20:38.222 "name": "pt1", 00:20:38.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:38.222 "is_configured": true, 00:20:38.222 "data_offset": 2048, 00:20:38.222 "data_size": 63488 00:20:38.223 }, 00:20:38.223 { 00:20:38.223 "name": null, 00:20:38.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.223 "is_configured": false, 00:20:38.223 "data_offset": 2048, 00:20:38.223 "data_size": 63488 00:20:38.223 }, 00:20:38.223 { 00:20:38.223 "name": null, 00:20:38.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.223 "is_configured": false, 00:20:38.223 "data_offset": 2048, 00:20:38.223 "data_size": 63488 00:20:38.223 } 00:20:38.223 ] 00:20:38.223 }' 00:20:38.223 23:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.223 23:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.790 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:38.790 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:38.790 23:06:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:39.048 [2024-07-13 23:06:28.398635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:39.048 [2024-07-13 23:06:28.398978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.048 [2024-07-13 23:06:28.399060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:39.048 [2024-07-13 23:06:28.399326] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.048 [2024-07-13 23:06:28.399898] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.048 [2024-07-13 23:06:28.400074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:39.048 [2024-07-13 23:06:28.400318] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:39.048 [2024-07-13 23:06:28.400450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:39.048 pt2 00:20:39.048 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:39.048 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:39.048 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:39.306 [2024-07-13 23:06:28.622639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:39.306 [2024-07-13 23:06:28.622945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.306 [2024-07-13 23:06:28.623101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:39.306 [2024-07-13 23:06:28.623259] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.306 [2024-07-13 23:06:28.623796] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.306 [2024-07-13 23:06:28.623988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:39.306 [2024-07-13 23:06:28.624225] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:39.306 [2024-07-13 23:06:28.624352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:39.306 [2024-07-13 23:06:28.624599] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:39.306 [2024-07-13 23:06:28.624723] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.306 [2024-07-13 23:06:28.624837] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:20:39.306 [2024-07-13 23:06:28.625424] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:39.306 [2024-07-13 23:06:28.625553] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:20:39.306 [2024-07-13 23:06:28.625745] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.306 pt3 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.306 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.564 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.564 "name": "raid_bdev1", 00:20:39.564 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:39.564 "strip_size_kb": 0, 00:20:39.564 "state": "online", 00:20:39.564 "raid_level": "raid1", 00:20:39.564 "superblock": true, 00:20:39.564 "num_base_bdevs": 3, 00:20:39.564 "num_base_bdevs_discovered": 3, 00:20:39.564 "num_base_bdevs_operational": 3, 00:20:39.564 "base_bdevs_list": [ 00:20:39.564 { 00:20:39.564 "name": "pt1", 00:20:39.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.564 "is_configured": true, 00:20:39.564 "data_offset": 2048, 00:20:39.564 "data_size": 63488 00:20:39.564 }, 00:20:39.564 { 00:20:39.564 "name": "pt2", 00:20:39.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.564 "is_configured": true, 00:20:39.564 "data_offset": 2048, 00:20:39.564 "data_size": 63488 00:20:39.564 }, 00:20:39.564 { 00:20:39.564 "name": "pt3", 00:20:39.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.564 "is_configured": true, 00:20:39.564 "data_offset": 2048, 00:20:39.564 "data_size": 63488 00:20:39.564 } 00:20:39.564 ] 00:20:39.564 }' 00:20:39.564 23:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.564 23:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:40.131 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:40.389 [2024-07-13 23:06:29.755291] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.389 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:40.389 "name": "raid_bdev1", 00:20:40.389 "aliases": [ 00:20:40.389 "1f3c73ae-8196-458d-bb06-bbd503c594b8" 00:20:40.389 ], 00:20:40.390 "product_name": "Raid Volume", 00:20:40.390 "block_size": 512, 00:20:40.390 "num_blocks": 63488, 00:20:40.390 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:40.390 "assigned_rate_limits": { 00:20:40.390 "rw_ios_per_sec": 0, 00:20:40.390 "rw_mbytes_per_sec": 0, 00:20:40.390 "r_mbytes_per_sec": 0, 00:20:40.390 "w_mbytes_per_sec": 0 00:20:40.390 }, 00:20:40.390 "claimed": false, 00:20:40.390 "zoned": false, 00:20:40.390 "supported_io_types": { 00:20:40.390 "read": true, 00:20:40.390 "write": true, 00:20:40.390 "unmap": false, 00:20:40.390 "flush": false, 00:20:40.390 "reset": true, 00:20:40.390 "nvme_admin": false, 00:20:40.390 "nvme_io": false, 00:20:40.390 "nvme_io_md": false, 00:20:40.390 "write_zeroes": true, 00:20:40.390 "zcopy": false, 00:20:40.390 "get_zone_info": false, 00:20:40.390 "zone_management": false, 00:20:40.390 "zone_append": false, 00:20:40.390 "compare": false, 00:20:40.390 "compare_and_write": false, 00:20:40.390 "abort": false, 00:20:40.390 "seek_hole": false, 00:20:40.390 "seek_data": false, 00:20:40.390 "copy": false, 00:20:40.390 "nvme_iov_md": false 00:20:40.390 }, 00:20:40.390 "memory_domains": [ 00:20:40.390 { 00:20:40.390 "dma_device_id": "system", 00:20:40.390 "dma_device_type": 1 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.390 "dma_device_type": 2 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "dma_device_id": "system", 00:20:40.390 "dma_device_type": 1 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.390 "dma_device_type": 2 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "dma_device_id": "system", 00:20:40.390 "dma_device_type": 1 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.390 "dma_device_type": 2 00:20:40.390 } 00:20:40.390 ], 00:20:40.390 "driver_specific": { 00:20:40.390 "raid": { 00:20:40.390 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:40.390 "strip_size_kb": 0, 00:20:40.390 "state": "online", 00:20:40.390 "raid_level": "raid1", 00:20:40.390 "superblock": true, 00:20:40.390 "num_base_bdevs": 3, 00:20:40.390 "num_base_bdevs_discovered": 3, 00:20:40.390 "num_base_bdevs_operational": 3, 00:20:40.390 "base_bdevs_list": [ 00:20:40.390 { 00:20:40.390 "name": "pt1", 00:20:40.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.390 "is_configured": true, 00:20:40.390 "data_offset": 2048, 00:20:40.390 "data_size": 63488 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "name": "pt2", 00:20:40.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.390 "is_configured": true, 00:20:40.390 "data_offset": 2048, 00:20:40.390 "data_size": 63488 00:20:40.390 }, 00:20:40.390 { 00:20:40.390 "name": "pt3", 00:20:40.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.390 "is_configured": true, 00:20:40.390 "data_offset": 2048, 00:20:40.390 "data_size": 63488 00:20:40.390 } 00:20:40.390 ] 00:20:40.390 } 00:20:40.390 } 00:20:40.390 }' 
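The trace above is, in effect, a manual walkthrough of the raid_superblock_test flow: three 32 MiB malloc bdevs (512-byte blocks, hence num_blocks 65536) are wrapped in passthru bdevs with fixed UUIDs, assembled into a raid1 volume with an on-disk superblock, torn down, and then reassembled from the superblocks left on the base bdevs. A minimal sketch of the same RPC sequence, assuming a bdev_svc app is already listening on the /var/tmp/spdk-raid.sock socket used in this run (only the first base bdev is spelled out; malloc2/pt2 and malloc3/pt3 follow the same two calls):

  # rpc client and socket exactly as used in this run
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MiB malloc bdev with 512-byte blocks, matching the sizes in the trace
  $RPC bdev_malloc_create 32 512 -b malloc1
  # wrap it in a passthru bdev with a fixed UUID so repeated runs produce the same identifiers
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # ...repeat bdev_malloc_create/bdev_passthru_create for malloc2/pt2 and malloc3/pt3...
  # assemble the raid1 volume; -s writes a superblock to each base bdev
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  # inspect state the way verify_raid_bdev_state does
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The superblock written by -s is also what makes the earlier negative test fail as expected: after the volume is deleted, recreating it directly on 'malloc1 malloc2 malloc3' returns -17 (File exists), because a superblock from the earlier raid bdev is still found on each malloc bdev.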
00:20:40.390 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:40.649 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:40.649 pt2 00:20:40.649 pt3' 00:20:40.649 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.649 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.649 23:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.908 "name": "pt1", 00:20:40.908 "aliases": [ 00:20:40.908 "00000000-0000-0000-0000-000000000001" 00:20:40.908 ], 00:20:40.908 "product_name": "passthru", 00:20:40.908 "block_size": 512, 00:20:40.908 "num_blocks": 65536, 00:20:40.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.908 "assigned_rate_limits": { 00:20:40.908 "rw_ios_per_sec": 0, 00:20:40.908 "rw_mbytes_per_sec": 0, 00:20:40.908 "r_mbytes_per_sec": 0, 00:20:40.908 "w_mbytes_per_sec": 0 00:20:40.908 }, 00:20:40.908 "claimed": true, 00:20:40.908 "claim_type": "exclusive_write", 00:20:40.908 "zoned": false, 00:20:40.908 "supported_io_types": { 00:20:40.908 "read": true, 00:20:40.908 "write": true, 00:20:40.908 "unmap": true, 00:20:40.908 "flush": true, 00:20:40.908 "reset": true, 00:20:40.908 "nvme_admin": false, 00:20:40.908 "nvme_io": false, 00:20:40.908 "nvme_io_md": false, 00:20:40.908 "write_zeroes": true, 00:20:40.908 "zcopy": true, 00:20:40.908 "get_zone_info": false, 00:20:40.908 "zone_management": false, 00:20:40.908 "zone_append": false, 00:20:40.908 "compare": false, 00:20:40.908 "compare_and_write": false, 00:20:40.908 "abort": true, 00:20:40.908 "seek_hole": false, 00:20:40.908 "seek_data": false, 00:20:40.908 "copy": true, 00:20:40.908 "nvme_iov_md": false 00:20:40.908 }, 00:20:40.908 "memory_domains": [ 00:20:40.908 { 00:20:40.908 "dma_device_id": "system", 00:20:40.908 "dma_device_type": 1 00:20:40.908 }, 00:20:40.908 { 00:20:40.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.908 "dma_device_type": 2 00:20:40.908 } 00:20:40.908 ], 00:20:40.908 "driver_specific": { 00:20:40.908 "passthru": { 00:20:40.908 "name": "pt1", 00:20:40.908 "base_bdev_name": "malloc1" 00:20:40.908 } 00:20:40.908 } 00:20:40.908 }' 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.908 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.166 23:06:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:41.166 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:41.424 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:41.424 "name": "pt2", 00:20:41.424 "aliases": [ 00:20:41.424 "00000000-0000-0000-0000-000000000002" 00:20:41.424 ], 00:20:41.424 "product_name": "passthru", 00:20:41.424 "block_size": 512, 00:20:41.424 "num_blocks": 65536, 00:20:41.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.424 "assigned_rate_limits": { 00:20:41.424 "rw_ios_per_sec": 0, 00:20:41.424 "rw_mbytes_per_sec": 0, 00:20:41.424 "r_mbytes_per_sec": 0, 00:20:41.424 "w_mbytes_per_sec": 0 00:20:41.424 }, 00:20:41.424 "claimed": true, 00:20:41.424 "claim_type": "exclusive_write", 00:20:41.424 "zoned": false, 00:20:41.424 "supported_io_types": { 00:20:41.424 "read": true, 00:20:41.424 "write": true, 00:20:41.424 "unmap": true, 00:20:41.424 "flush": true, 00:20:41.424 "reset": true, 00:20:41.424 "nvme_admin": false, 00:20:41.424 "nvme_io": false, 00:20:41.424 "nvme_io_md": false, 00:20:41.424 "write_zeroes": true, 00:20:41.424 "zcopy": true, 00:20:41.424 "get_zone_info": false, 00:20:41.424 "zone_management": false, 00:20:41.424 "zone_append": false, 00:20:41.424 "compare": false, 00:20:41.424 "compare_and_write": false, 00:20:41.424 "abort": true, 00:20:41.424 "seek_hole": false, 00:20:41.424 "seek_data": false, 00:20:41.424 "copy": true, 00:20:41.424 "nvme_iov_md": false 00:20:41.424 }, 00:20:41.424 "memory_domains": [ 00:20:41.424 { 00:20:41.424 "dma_device_id": "system", 00:20:41.424 "dma_device_type": 1 00:20:41.424 }, 00:20:41.424 { 00:20:41.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.424 "dma_device_type": 2 00:20:41.424 } 00:20:41.424 ], 00:20:41.424 "driver_specific": { 00:20:41.424 "passthru": { 00:20:41.424 "name": "pt2", 00:20:41.424 "base_bdev_name": "malloc2" 00:20:41.424 } 00:20:41.424 } 00:20:41.424 }' 00:20:41.424 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.424 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.683 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:41.683 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.683 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.683 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.683 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.683 23:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.683 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.683 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.940 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.940 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:20:41.940 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:41.940 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:41.940 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:42.199 "name": "pt3", 00:20:42.199 "aliases": [ 00:20:42.199 "00000000-0000-0000-0000-000000000003" 00:20:42.199 ], 00:20:42.199 "product_name": "passthru", 00:20:42.199 "block_size": 512, 00:20:42.199 "num_blocks": 65536, 00:20:42.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.199 "assigned_rate_limits": { 00:20:42.199 "rw_ios_per_sec": 0, 00:20:42.199 "rw_mbytes_per_sec": 0, 00:20:42.199 "r_mbytes_per_sec": 0, 00:20:42.199 "w_mbytes_per_sec": 0 00:20:42.199 }, 00:20:42.199 "claimed": true, 00:20:42.199 "claim_type": "exclusive_write", 00:20:42.199 "zoned": false, 00:20:42.199 "supported_io_types": { 00:20:42.199 "read": true, 00:20:42.199 "write": true, 00:20:42.199 "unmap": true, 00:20:42.199 "flush": true, 00:20:42.199 "reset": true, 00:20:42.199 "nvme_admin": false, 00:20:42.199 "nvme_io": false, 00:20:42.199 "nvme_io_md": false, 00:20:42.199 "write_zeroes": true, 00:20:42.199 "zcopy": true, 00:20:42.199 "get_zone_info": false, 00:20:42.199 "zone_management": false, 00:20:42.199 "zone_append": false, 00:20:42.199 "compare": false, 00:20:42.199 "compare_and_write": false, 00:20:42.199 "abort": true, 00:20:42.199 "seek_hole": false, 00:20:42.199 "seek_data": false, 00:20:42.199 "copy": true, 00:20:42.199 "nvme_iov_md": false 00:20:42.199 }, 00:20:42.199 "memory_domains": [ 00:20:42.199 { 00:20:42.199 "dma_device_id": "system", 00:20:42.199 "dma_device_type": 1 00:20:42.199 }, 00:20:42.199 { 00:20:42.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.199 "dma_device_type": 2 00:20:42.199 } 00:20:42.199 ], 00:20:42.199 "driver_specific": { 00:20:42.199 "passthru": { 00:20:42.199 "name": "pt3", 00:20:42.199 "base_bdev_name": "malloc3" 00:20:42.199 } 00:20:42.199 } 00:20:42.199 }' 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:42.199 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:42.457 23:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid'
00:20:42.715 [2024-07-13 23:06:32.005994] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:42.715 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 1f3c73ae-8196-458d-bb06-bbd503c594b8 '!=' 1f3c73ae-8196-458d-bb06-bbd503c594b8 ']'
00:20:42.715 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1
00:20:42.715 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:20:42.715 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0
00:20:42.715 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:20:42.973 [2024-07-13 23:06:32.225788] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:42.973 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:43.232 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:43.232 "name": "raid_bdev1",
00:20:43.232 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8",
00:20:43.232 "strip_size_kb": 0,
00:20:43.232 "state": "online",
00:20:43.232 "raid_level": "raid1",
00:20:43.232 "superblock": true,
00:20:43.232 "num_base_bdevs": 3,
00:20:43.232 "num_base_bdevs_discovered": 2,
00:20:43.232 "num_base_bdevs_operational": 2,
00:20:43.232 "base_bdevs_list": [
00:20:43.232 {
00:20:43.232 "name": null,
00:20:43.232 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.232 "is_configured": false,
00:20:43.232 "data_offset": 2048,
00:20:43.232 "data_size": 63488
00:20:43.232 },
00:20:43.232 {
00:20:43.232 "name": "pt2",
00:20:43.232 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:43.232 "is_configured": true,
00:20:43.232 "data_offset": 2048,
00:20:43.232 "data_size": 63488
00:20:43.232 },
00:20:43.232 {
00:20:43.232 "name": "pt3",
00:20:43.232 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:43.232 "is_configured": true,
00:20:43.232 "data_offset": 2048,
00:20:43.232 "data_size": 63488
00:20:43.232 }
00:20:43.232 ]
00:20:43.232 }'
00:20:43.232 23:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:43.232 23:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:43.798 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:44.056 [2024-07-13 23:06:33.253981] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:44.056 [2024-07-13 23:06:33.254212] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:44.056 [2024-07-13 23:06:33.254426] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:44.056 [2024-07-13 23:06:33.254650] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:44.056 [2024-07-13 23:06:33.254802] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:20:44.056 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:44.056 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]'
00:20:44.313 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev=
00:20:44.313 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']'
00:20:44.313 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 ))
00:20:44.313 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs ))
00:20:44.313 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:20:44.571 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ ))
00:20:44.571 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs ))
00:20:44.571 23:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:20:44.829 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ ))
00:20:44.829 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs ))
00:20:44.829 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 ))
00:20:44.829 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 ))
00:20:44.829 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:45.087 [2024-07-13 23:06:34.258193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:45.087 [2024-07-13 23:06:34.258481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:45.087 [2024-07-13 23:06:34.258653] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:20:45.087 [2024-07-13 23:06:34.258839] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:45.087 [2024-07-13 23:06:34.261433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:45.087 [2024-07-13 23:06:34.261606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:45.087 [2024-07-13 23:06:34.261860] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:20:45.087 [2024-07-13 23:06:34.262004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:45.087 pt2
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:45.087 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:45.344 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:45.344 "name": "raid_bdev1",
00:20:45.344 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8",
00:20:45.344 "strip_size_kb": 0,
00:20:45.344 "state": "configuring",
00:20:45.344 "raid_level": "raid1",
00:20:45.344 "superblock": true,
00:20:45.344 "num_base_bdevs": 3,
00:20:45.344 "num_base_bdevs_discovered": 1,
00:20:45.344 "num_base_bdevs_operational": 2,
00:20:45.344 "base_bdevs_list": [
00:20:45.344 {
00:20:45.344 "name": null,
00:20:45.344 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:45.344 "is_configured": false,
00:20:45.344 "data_offset": 2048,
00:20:45.344 "data_size": 63488
00:20:45.344 },
00:20:45.344 {
00:20:45.344 "name": "pt2",
00:20:45.344 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:45.344 "is_configured": true,
00:20:45.344 "data_offset": 2048,
00:20:45.344 "data_size": 63488
00:20:45.344 },
00:20:45.344 {
00:20:45.344 "name": null,
00:20:45.344 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:45.344 "is_configured": false,
00:20:45.344 "data_offset": 2048,
00:20:45.344 "data_size": 63488
00:20:45.344 }
00:20:45.344 ]
00:20:45.344 }'
00:20:45.344 23:06:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:45.344 23:06:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ ))
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 ))
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:20:45.912 [2024-07-13 23:06:35.298557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:45.912 [2024-07-13 23:06:35.299070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:45.912 [2024-07-13 23:06:35.299277] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:20:45.912 [2024-07-13 23:06:35.299425] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:45.912 [2024-07-13 23:06:35.300063] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:45.912 [2024-07-13 23:06:35.300229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:45.912 [2024-07-13 23:06:35.300455] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:20:45.912 [2024-07-13 23:06:35.300587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:45.912 [2024-07-13 23:06:35.300766] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80
00:20:45.912 [2024-07-13 23:06:35.300887] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:45.912 [2024-07-13 23:06:35.301143] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:20:45.912 [2024-07-13 23:06:35.301646] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80
00:20:45.912 [2024-07-13 23:06:35.301774] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80
00:20:45.912 [2024-07-13 23:06:35.301981] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:45.912 pt3
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:45.912 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:46.171 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:46.171 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
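
verify_raid_bdev_state (bdev_raid.sh@116-@128) fetches the raid bdev's JSON once over RPC and compares it against the expected values passed as arguments. Roughly equivalent standalone checks in bash, under the assumption that the helper's comparisons mirror the locals shown above:

    # Fetch raid_bdev1's description and assert state/level/member count.
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<<"$info") == online ]]
    [[ $(jq -r .raid_level <<<"$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 2 ]]
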
"1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:46.171 "strip_size_kb": 0, 00:20:46.171 "state": "online", 00:20:46.171 "raid_level": "raid1", 00:20:46.171 "superblock": true, 00:20:46.171 "num_base_bdevs": 3, 00:20:46.171 "num_base_bdevs_discovered": 2, 00:20:46.171 "num_base_bdevs_operational": 2, 00:20:46.171 "base_bdevs_list": [ 00:20:46.171 { 00:20:46.171 "name": null, 00:20:46.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.171 "is_configured": false, 00:20:46.171 "data_offset": 2048, 00:20:46.171 "data_size": 63488 00:20:46.171 }, 00:20:46.171 { 00:20:46.171 "name": "pt2", 00:20:46.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.171 "is_configured": true, 00:20:46.171 "data_offset": 2048, 00:20:46.171 "data_size": 63488 00:20:46.171 }, 00:20:46.171 { 00:20:46.171 "name": "pt3", 00:20:46.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:46.171 "is_configured": true, 00:20:46.171 "data_offset": 2048, 00:20:46.171 "data_size": 63488 00:20:46.171 } 00:20:46.171 ] 00:20:46.171 }' 00:20:46.171 23:06:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.171 23:06:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.738 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:46.996 [2024-07-13 23:06:36.322825] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:46.996 [2024-07-13 23:06:36.323029] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.996 [2024-07-13 23:06:36.323245] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.996 [2024-07-13 23:06:36.323442] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.996 [2024-07-13 23:06:36.323584] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:20:46.996 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.996 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:47.254 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:47.254 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:47.254 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:20:47.254 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:20:47.254 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:47.515 23:06:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:47.780 [2024-07-13 23:06:37.067036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:47.780 [2024-07-13 23:06:37.067397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.780 [2024-07-13 23:06:37.067629] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:47.780 [2024-07-13 23:06:37.067792] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.780 [2024-07-13 23:06:37.070329] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.780 [2024-07-13 23:06:37.070515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:47.780 [2024-07-13 23:06:37.070775] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:47.780 [2024-07-13 23:06:37.070925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:47.780 [2024-07-13 23:06:37.071253] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:47.780 [2024-07-13 23:06:37.071389] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.780 [2024-07-13 23:06:37.071543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:20:47.780 [2024-07-13 23:06:37.071711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:47.780 pt1 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.780 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.038 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.038 "name": "raid_bdev1", 00:20:48.038 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8", 00:20:48.038 "strip_size_kb": 0, 00:20:48.038 "state": "configuring", 00:20:48.038 "raid_level": "raid1", 00:20:48.038 "superblock": true, 00:20:48.038 "num_base_bdevs": 3, 00:20:48.038 "num_base_bdevs_discovered": 1, 00:20:48.038 "num_base_bdevs_operational": 2, 00:20:48.038 "base_bdevs_list": [ 00:20:48.038 { 00:20:48.038 "name": null, 00:20:48.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.038 "is_configured": false, 00:20:48.038 "data_offset": 2048, 00:20:48.038 "data_size": 63488 00:20:48.038 }, 00:20:48.038 { 00:20:48.038 "name": "pt2", 00:20:48.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.038 "is_configured": true, 00:20:48.038 "data_offset": 2048, 
00:20:48.038 "data_size": 63488 00:20:48.038 }, 00:20:48.038 { 00:20:48.038 "name": null, 00:20:48.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:48.039 "is_configured": false, 00:20:48.039 "data_offset": 2048, 00:20:48.039 "data_size": 63488 00:20:48.039 } 00:20:48.039 ] 00:20:48.039 }' 00:20:48.039 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.039 23:06:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.604 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:20:48.604 23:06:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:48.864 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:20:48.864 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:49.123 [2024-07-13 23:06:38.409549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:49.123 [2024-07-13 23:06:38.410094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.123 [2024-07-13 23:06:38.410284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:49.123 [2024-07-13 23:06:38.410469] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.123 [2024-07-13 23:06:38.411297] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.123 [2024-07-13 23:06:38.411502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:49.123 [2024-07-13 23:06:38.411785] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:49.123 [2024-07-13 23:06:38.411978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:49.123 [2024-07-13 23:06:38.412336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:49.123 [2024-07-13 23:06:38.412506] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:49.123 [2024-07-13 23:06:38.412693] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:20:49.123 [2024-07-13 23:06:38.413378] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:49.123 [2024-07-13 23:06:38.413552] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:49.123 [2024-07-13 23:06:38.413878] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.123 pt3 00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- 
00:20:48.864 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:20:49.123 [2024-07-13 23:06:38.409549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:49.123 [2024-07-13 23:06:38.410094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:49.123 [2024-07-13 23:06:38.410284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:20:49.123 [2024-07-13 23:06:38.410469] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:49.123 [2024-07-13 23:06:38.411297] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:49.123 [2024-07-13 23:06:38.411502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:49.123 [2024-07-13 23:06:38.411785] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:20:49.123 [2024-07-13 23:06:38.411978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:49.123 [2024-07-13 23:06:38.412336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:20:49.123 [2024-07-13 23:06:38.412506] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:49.123 [2024-07-13 23:06:38.412693] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0
00:20:49.123 [2024-07-13 23:06:38.413378] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:20:49.123 [2024-07-13 23:06:38.413552] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:20:49.123 [2024-07-13 23:06:38.413878] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:49.123 pt3
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:49.123 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:49.381 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:49.381 "name": "raid_bdev1",
00:20:49.381 "uuid": "1f3c73ae-8196-458d-bb06-bbd503c594b8",
00:20:49.381 "strip_size_kb": 0,
00:20:49.381 "state": "online",
00:20:49.381 "raid_level": "raid1",
00:20:49.381 "superblock": true,
00:20:49.381 "num_base_bdevs": 3,
00:20:49.381 "num_base_bdevs_discovered": 2,
00:20:49.381 "num_base_bdevs_operational": 2,
00:20:49.381 "base_bdevs_list": [
00:20:49.381 {
00:20:49.381 "name": null,
00:20:49.381 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:49.381 "is_configured": false,
00:20:49.381 "data_offset": 2048,
00:20:49.381 "data_size": 63488
00:20:49.381 },
00:20:49.381 {
00:20:49.381 "name": "pt2",
00:20:49.381 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:49.381 "is_configured": true,
00:20:49.381 "data_offset": 2048,
00:20:49.381 "data_size": 63488
00:20:49.381 },
00:20:49.381 {
00:20:49.381 "name": "pt3",
00:20:49.381 "uuid": "00000000-0000-0000-0000-000000000003",
00:20:49.381 "is_configured": true,
00:20:49.381 "data_offset": 2048,
00:20:49.381 "data_size": 63488
00:20:49.381 }
00:20:49.381 ]
00:20:49.381 }'
00:20:49.381 23:06:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:49.381 23:06:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:49.947 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:20:49.947 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:20:50.205 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]]
00:20:50.205 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:50.205 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid'
00:20:50.463 [2024-07-13 23:06:39.742607] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 1f3c73ae-8196-458d-bb06-bbd503c594b8 '!=' 1f3c73ae-8196-458d-bb06-bbd503c594b8 ']'
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 142914
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 142914 ']'
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 142914
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142914
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142914'
00:20:50.463 killing process with pid 142914
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 142914
00:20:50.463 23:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 142914
00:20:50.463 [2024-07-13 23:06:39.784384] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:50.463 [2024-07-13 23:06:39.784478] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:50.463 [2024-07-13 23:06:39.784616] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:50.463 [2024-07-13 23:06:39.784745] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:20:50.463 [2024-07-13 23:06:39.824043] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:51.030 23:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0
00:20:51.030
00:20:51.030 real 0m22.689s
00:20:51.030 user 0m42.987s
00:20:51.030 sys 0m2.607s
00:20:51.030 23:06:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:51.030 23:06:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:51.030 ************************************
00:20:51.030 END TEST raid_superblock_test
00:20:51.030 ************************************
00:20:51.030 23:06:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
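
Each test ends by tearing down the SPDK app with the killprocess helper traced at common/autotest_common.sh@948-@972: check that the pid is alive, confirm the process is the expected reactor, kill it, and wait so its exit status is reaped. A condensed sketch of that shutdown sequence (the real helper has more branches, e.g. for sudo wrappers):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # is the pid still alive?
        process_name=$(ps --no-headers -o comm= "$pid")
        # the real helper special-cases process_name = sudo; reactor_0 is expected here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap the reactor, propagate status
    }
    killprocess 142914
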
00:20:51.030 23:06:40 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:20:51.030 23:06:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:20:51.030 23:06:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:51.030 23:06:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:51.030 ************************************
00:20:51.030 START TEST raid_read_error_test
00:20:51.030 ************************************
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']'
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0
00:20:51.030 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3yW9ZERTmN
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=143656
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 143656 /var/tmp/spdk-raid.sock
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 143656 ']'
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:51.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:51.031 23:06:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:20:51.031 [2024-07-13 23:06:40.281496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
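
raid_io_error_test drives real I/O through the bdevperf example app rather than through RPC-only checks. The @805-@809 lines above capture that startup: create a log file, launch bdevperf against a private RPC socket, then block until the socket accepts RPCs. Schematically, in bash (waitforlisten is the helper from autotest_common.sh; all flags are copied from the trace):

    bdevperf_log=$(mktemp -p /raidtest)     # e.g. /raidtest/tmp.3yW9ZERTmN
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
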
00:20:51.031 [2024-07-13 23:06:40.281988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143656 ]
00:20:51.031 [2024-07-13 23:06:40.430289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:51.289 [2024-07-13 23:06:40.519641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:51.289 [2024-07-13 23:06:40.579483] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:51.857 23:06:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:51.857 23:06:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0
00:20:51.857 23:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:20:51.857 23:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:52.116 BaseBdev1_malloc
00:20:52.116 23:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:20:52.375 true
00:20:52.375 23:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:20:52.633 [2024-07-13 23:06:41.966781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:20:52.633 [2024-07-13 23:06:41.967140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:52.633 [2024-07-13 23:06:41.967345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80
00:20:52.633 [2024-07-13 23:06:41.967542] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:52.633 [2024-07-13 23:06:41.970361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:52.633 [2024-07-13 23:06:41.970555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:52.633 BaseBdev1
00:20:52.633 23:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:20:52.633 23:06:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:52.892 BaseBdev2_malloc
00:20:52.892 23:06:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:20:53.151 true
00:20:53.151 23:06:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:20:53.410 [2024-07-13 23:06:42.693923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:20:53.410 [2024-07-13 23:06:42.694211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:53.410 [2024-07-13 23:06:42.694402] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:20:53.410 [2024-07-13 23:06:42.694572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:53.410 [2024-07-13 23:06:42.697232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:53.410 [2024-07-13 23:06:42.697441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:53.410 BaseBdev2
00:20:53.410 23:06:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:20:53.410 23:06:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:20:53.668 BaseBdev3_malloc
00:20:53.668 23:06:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc
00:20:53.927 true
00:20:53.927 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:20:54.184 [2024-07-13 23:06:43.380437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:20:54.184 [2024-07-13 23:06:43.380753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:54.184 [2024-07-13 23:06:43.380973] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:20:54.184 [2024-07-13 23:06:43.381150] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:54.184 [2024-07-13 23:06:43.383789] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:54.184 [2024-07-13 23:06:43.383974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:20:54.184 BaseBdev3
00:20:54.184 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
00:20:54.441 [2024-07-13 23:06:43.608725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:54.441 [2024-07-13 23:06:43.610973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:54.441 [2024-07-13 23:06:43.611261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:54.441 [2024-07-13 23:06:43.611694] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180
00:20:54.441 [2024-07-13 23:06:43.611834] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:54.441 [2024-07-13 23:06:43.612028] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:20:54.441 [2024-07-13 23:06:43.612604] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180
00:20:54.441 [2024-07-13 23:06:43.612736] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180
00:20:54.441 [2024-07-13 23:06:43.613107] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
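
Each BaseBdev in this test is a three-layer stack: a malloc bdev for storage, an error bdev wrapped around it so faults can be injected later (bdev_error_create names its device EE_<base>, as the EE_BaseBdev1_malloc notices show), and a passthru bdev on top that the raid module can claim. The raid1 array is then created over the three passthru devices. Condensed from the @812-@819 trace above into a loop sketch:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for i in 1 2 3; do
        rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        rpc bdev_error_create BaseBdev${i}_malloc            # yields EE_BaseBdev${i}_malloc
        rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
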
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:54.441 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:54.699 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:54.699 "name": "raid_bdev1",
00:20:54.699 "uuid": "a02a2fa5-4cb4-4b08-82c1-565997c08674",
00:20:54.699 "strip_size_kb": 0,
00:20:54.699 "state": "online",
00:20:54.699 "raid_level": "raid1",
00:20:54.699 "superblock": true,
00:20:54.699 "num_base_bdevs": 3,
00:20:54.699 "num_base_bdevs_discovered": 3,
00:20:54.699 "num_base_bdevs_operational": 3,
00:20:54.699 "base_bdevs_list": [
00:20:54.699 {
00:20:54.699 "name": "BaseBdev1",
00:20:54.699 "uuid": "51ff2578-1399-5c83-893c-b86f37969676",
00:20:54.699 "is_configured": true,
00:20:54.699 "data_offset": 2048,
00:20:54.699 "data_size": 63488
00:20:54.699 },
00:20:54.699 {
00:20:54.699 "name": "BaseBdev2",
00:20:54.699 "uuid": "94c65f0f-6893-52dc-933d-e0d82aee774b",
00:20:54.699 "is_configured": true,
00:20:54.699 "data_offset": 2048,
00:20:54.699 "data_size": 63488
00:20:54.699 },
00:20:54.699 {
00:20:54.699 "name": "BaseBdev3",
00:20:54.699 "uuid": "65d27e63-2f42-5ed8-b43f-ed208d71cb72",
00:20:54.699 "is_configured": true,
00:20:54.699 "data_offset": 2048,
00:20:54.699 "data_size": 63488
00:20:54.699 }
00:20:54.699 ]
00:20:54.699 }'
00:20:54.699 23:06:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:54.699 23:06:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:20:55.265 23:06:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:20:55.265 23:06:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:20:55.265 [2024-07-13 23:06:44.613771] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:20:56.201 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]]
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]]
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:56.459 23:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:56.717 23:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:56.717 "name": "raid_bdev1",
00:20:56.717 "uuid": "a02a2fa5-4cb4-4b08-82c1-565997c08674",
00:20:56.717 "strip_size_kb": 0,
00:20:56.717 "state": "online",
00:20:56.717 "raid_level": "raid1",
00:20:56.717 "superblock": true,
00:20:56.717 "num_base_bdevs": 3,
00:20:56.717 "num_base_bdevs_discovered": 3,
00:20:56.717 "num_base_bdevs_operational": 3,
00:20:56.717 "base_bdevs_list": [
00:20:56.717 {
00:20:56.717 "name": "BaseBdev1",
00:20:56.717 "uuid": "51ff2578-1399-5c83-893c-b86f37969676",
00:20:56.717 "is_configured": true,
00:20:56.717 "data_offset": 2048,
00:20:56.717 "data_size": 63488
00:20:56.717 },
00:20:56.717 {
00:20:56.717 "name": "BaseBdev2",
00:20:56.717 "uuid": "94c65f0f-6893-52dc-933d-e0d82aee774b",
00:20:56.717 "is_configured": true,
00:20:56.718 "data_offset": 2048,
00:20:56.718 "data_size": 63488
00:20:56.718 },
00:20:56.718 {
00:20:56.718 "name": "BaseBdev3",
00:20:56.718 "uuid": "65d27e63-2f42-5ed8-b43f-ed208d71cb72",
00:20:56.718 "is_configured": true,
00:20:56.718 "data_offset": 2048,
00:20:56.718 "data_size": 63488
00:20:56.718 }
00:20:56.718 ]
00:20:56.718 }'
00:20:56.718 23:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:56.718 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:57.651 [2024-07-13 23:06:46.947590] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:57.651 [2024-07-13 23:06:46.947903] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:57.651 [2024-07-13 23:06:46.950649] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:57.651 [2024-07-13 23:06:46.950850] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:57.651 [2024-07-13 23:06:46.951172] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline
00:20:57.651 0
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 143656
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 143656 ']'
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 143656
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143656
00:20:57.651 killing process with pid 143656
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143656'
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 143656
00:20:57.651 [2024-07-13 23:06:46.990553] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:57.651 23:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 143656
00:20:57.652 [2024-07-13 23:06:47.014221] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3yW9ZERTmN
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]]
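
Because the array is raid1 and the injected error was on the read path, the failed read is served from a mirror copy: the array keeps all 3 discovered base bdevs and the bdevperf run finishes clean. The @843-@845 lines assert exactly that by pulling the failure-per-second column out of the bdevperf log; roughly:

    # Grep the raid_bdev1 result row out of the bdevperf log; field 6 is the
    # failure rate (the column position is inferred from the awk call in the trace).
    fail_per_s=$(grep -v Job /raidtest/tmp.3yW9ZERTmN | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s = 0.00 ]]
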
00:20:57.910
00:20:57.910 real 0m7.059s
00:20:57.910 user 0m11.498s
00:20:57.910 ************************************
00:20:57.910 END TEST raid_read_error_test
00:20:57.910 ************************************
00:20:57.910 sys 0m0.869s
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:57.910 23:06:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:20:58.167 23:06:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:20:58.167 23:06:47 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write
00:20:58.167 23:06:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:20:58.167 23:06:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:58.168 23:06:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:58.167 ************************************
00:20:58.168 START TEST raid_write_error_test
00:20:58.168 ************************************
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']'
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IkDgSMAxuY
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=143851
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 143851 /var/tmp/spdk-raid.sock
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 143851 ']'
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:58.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:58.168 23:06:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:20:58.168 [2024-07-13 23:06:47.401053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:20:58.168 [2024-07-13 23:06:47.401589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143851 ]
00:20:58.426 [2024-07-13 23:06:47.549257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:58.426 [2024-07-13 23:06:47.622699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:58.426 [2024-07-13 23:06:47.683063] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:59.359 23:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:59.359 23:06:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0
00:20:59.359 23:06:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:20:59.359 23:06:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:59.359 BaseBdev1_malloc
00:20:59.617 23:06:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:20:59.617 true
00:20:59.875 23:06:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:20:59.875 [2024-07-13 23:06:49.186966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:20:59.875 [2024-07-13 23:06:49.187464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:59.875 [2024-07-13 23:06:49.187648] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80
00:20:59.875 [2024-07-13 23:06:49.187853] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:59.875 [2024-07-13 23:06:49.190868] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:59.875 [2024-07-13 23:06:49.191106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:59.875 BaseBdev1
00:20:59.875 23:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:20:59.875 23:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:21:00.133 BaseBdev2_malloc
00:21:00.133 23:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:21:00.391 true
00:21:00.392 23:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:21:00.650 [2024-07-13 23:06:49.889781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:21:00.650 [2024-07-13 23:06:49.890190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:00.650 [2024-07-13 23:06:49.890386] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:21:00.650 [2024-07-13 23:06:49.890582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:00.650 [2024-07-13 23:06:49.893420] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:00.650 [2024-07-13 23:06:49.893637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:21:00.650 BaseBdev2
00:21:00.650 23:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:21:00.650 23:06:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:21:00.908 BaseBdev3_malloc
00:21:00.908 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc
00:21:01.165 true
00:21:01.165 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:21:01.423 [2024-07-13 23:06:50.601428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:21:01.423 [2024-07-13 23:06:50.601951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:01.423 [2024-07-13 23:06:50.602192] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:21:01.423 [2024-07-13 23:06:50.602390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:01.423 [2024-07-13 23:06:50.605751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:01.423 [2024-07-13 23:06:50.605996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:01.423 BaseBdev3
00:21:01.423 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
00:21:01.423 [2024-07-13 23:06:50.822710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:01.423 [2024-07-13 23:06:50.825846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:01.423 [2024-07-13 23:06:50.826146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:01.423 [2024-07-13 23:06:50.826780] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180
00:21:01.423 [2024-07-13 23:06:50.826876] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:01.423 [2024-07-13 23:06:50.827231] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:21:01.423 [2024-07-13 23:06:50.828238] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180
00:21:01.423 [2024-07-13 23:06:50.828434] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180
00:21:01.423 [2024-07-13 23:06:50.828903] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:01.681 23:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:01.939 23:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:21:01.939 "name": "raid_bdev1",
00:21:01.939 "uuid": "c01568fe-7d84-4baf-b136-d8027a586b6c",
00:21:01.939 "strip_size_kb": 0,
00:21:01.939 "state": "online",
00:21:01.939 "raid_level": "raid1",
00:21:01.939 "superblock": true,
00:21:01.939 "num_base_bdevs": 3,
00:21:01.939 "num_base_bdevs_discovered": 3,
00:21:01.939 "num_base_bdevs_operational": 3,
00:21:01.939 "base_bdevs_list": [
00:21:01.939 {
00:21:01.939 "name": "BaseBdev1",
00:21:01.939 "uuid": "9de9facd-102d-5f87-af4e-9bc6432b184c",
00:21:01.939 "is_configured": true,
00:21:01.939 "data_offset": 2048,
00:21:01.939 "data_size": 63488
00:21:01.939 },
00:21:01.939 {
00:21:01.939 "name": "BaseBdev2",
00:21:01.939 "uuid": "885cbe92-cfe0-5d13-972e-e3c513f8e4fe",
00:21:01.939 "is_configured": true,
00:21:01.939 "data_offset": 2048,
00:21:01.939 "data_size": 63488
00:21:01.939 },
00:21:01.939 {
00:21:01.939 "name": "BaseBdev3",
00:21:01.939 "uuid": "cc886ff4-4e9b-5960-ba74-d9b0208907a4",
00:21:01.939 "is_configured": true,
00:21:01.939 "data_offset": 2048,
00:21:01.939 "data_size": 63488
00:21:01.939 }
00:21:01.939 ]
00:21:01.939 }'
00:21:01.939 23:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:21:01.939 23:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:21:02.505 23:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
/var/tmp/spdk-raid.sock perform_tests 00:21:02.505 [2024-07-13 23:06:51.791843] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:03.439 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:03.698 [2024-07-13 23:06:52.959746] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:03.698 [2024-07-13 23:06:52.960219] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:03.698 [2024-07-13 23:06:52.960690] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002460 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.698 23:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.957 23:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.957 "name": "raid_bdev1", 00:21:03.957 "uuid": "c01568fe-7d84-4baf-b136-d8027a586b6c", 00:21:03.957 "strip_size_kb": 0, 00:21:03.957 "state": "online", 00:21:03.957 "raid_level": "raid1", 00:21:03.957 "superblock": true, 00:21:03.957 "num_base_bdevs": 3, 00:21:03.957 "num_base_bdevs_discovered": 2, 00:21:03.957 "num_base_bdevs_operational": 2, 00:21:03.957 "base_bdevs_list": [ 00:21:03.957 { 00:21:03.957 "name": null, 00:21:03.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.957 "is_configured": false, 00:21:03.957 "data_offset": 2048, 00:21:03.957 "data_size": 63488 00:21:03.957 }, 00:21:03.957 { 00:21:03.957 "name": "BaseBdev2", 00:21:03.957 "uuid": "885cbe92-cfe0-5d13-972e-e3c513f8e4fe", 00:21:03.957 "is_configured": true, 00:21:03.957 "data_offset": 2048, 00:21:03.957 
"data_size": 63488 00:21:03.957 }, 00:21:03.957 { 00:21:03.957 "name": "BaseBdev3", 00:21:03.957 "uuid": "cc886ff4-4e9b-5960-ba74-d9b0208907a4", 00:21:03.957 "is_configured": true, 00:21:03.957 "data_offset": 2048, 00:21:03.957 "data_size": 63488 00:21:03.957 } 00:21:03.957 ] 00:21:03.957 }' 00:21:03.957 23:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.957 23:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.522 23:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:04.780 [2024-07-13 23:06:54.168460] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.780 [2024-07-13 23:06:54.168816] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.780 [2024-07-13 23:06:54.172175] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.780 [2024-07-13 23:06:54.172382] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.780 [2024-07-13 23:06:54.172600] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.780 [2024-07-13 23:06:54.172738] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:04.780 0 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 143851 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 143851 ']' 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 143851 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143851 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143851' 00:21:05.038 killing process with pid 143851 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 143851 00:21:05.038 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 143851 00:21:05.038 [2024-07-13 23:06:54.214774] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:05.038 [2024-07-13 23:06:54.252253] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IkDgSMAxuY 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:05.295 00:21:05.295 real 0m7.278s 00:21:05.295 user 0m11.714s 00:21:05.295 sys 0m0.925s 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.295 23:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 ************************************ 00:21:05.295 END TEST raid_write_error_test 00:21:05.295 ************************************ 00:21:05.295 23:06:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:05.295 23:06:54 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:21:05.295 23:06:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:05.295 23:06:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:05.295 23:06:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:05.295 23:06:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.295 23:06:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 ************************************ 00:21:05.295 START TEST raid_state_function_test 00:21:05.295 ************************************ 00:21:05.295 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=144045 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 144045' 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:05.296 Process raid pid: 144045 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 144045 /var/tmp/spdk-raid.sock 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 144045 ']' 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:05.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.296 23:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.554 [2024-07-13 23:06:54.733448] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
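
Once this bdev_svc app is listening, the state-function test first creates a four-disk raid0 array whose base bdevs do not exist yet; the raid bdev is therefore created in the "configuring" state, as the trace below confirms. A minimal sketch of that step ($rpc as above):

$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# -> "state": "configuring", "num_base_bdevs_discovered": 0,
#    "num_base_bdevs_operational": 4
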
00:21:05.554 [2024-07-13 23:06:54.733951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.554 [2024-07-13 23:06:54.874879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.811 [2024-07-13 23:06:54.975178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.811 [2024-07-13 23:06:55.049826] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.389 23:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.389 23:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:21:06.389 23:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:06.647 [2024-07-13 23:06:55.996057] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:06.647 [2024-07-13 23:06:55.996397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:06.647 [2024-07-13 23:06:55.996535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:06.647 [2024-07-13 23:06:55.996684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:06.647 [2024-07-13 23:06:55.996792] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:06.647 [2024-07-13 23:06:55.996977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:06.647 [2024-07-13 23:06:55.997088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:06.647 [2024-07-13 23:06:55.997229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.647 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.905 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.905 "name": "Existed_Raid", 00:21:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.905 "strip_size_kb": 64, 00:21:06.905 "state": "configuring", 00:21:06.905 "raid_level": "raid0", 00:21:06.905 "superblock": false, 00:21:06.905 "num_base_bdevs": 4, 00:21:06.905 "num_base_bdevs_discovered": 0, 00:21:06.905 "num_base_bdevs_operational": 4, 00:21:06.905 "base_bdevs_list": [ 00:21:06.905 { 00:21:06.905 "name": "BaseBdev1", 00:21:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.905 "is_configured": false, 00:21:06.905 "data_offset": 0, 00:21:06.905 "data_size": 0 00:21:06.905 }, 00:21:06.905 { 00:21:06.905 "name": "BaseBdev2", 00:21:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.905 "is_configured": false, 00:21:06.905 "data_offset": 0, 00:21:06.905 "data_size": 0 00:21:06.905 }, 00:21:06.905 { 00:21:06.905 "name": "BaseBdev3", 00:21:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.905 "is_configured": false, 00:21:06.905 "data_offset": 0, 00:21:06.905 "data_size": 0 00:21:06.905 }, 00:21:06.905 { 00:21:06.905 "name": "BaseBdev4", 00:21:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.905 "is_configured": false, 00:21:06.905 "data_offset": 0, 00:21:06.905 "data_size": 0 00:21:06.905 } 00:21:06.905 ] 00:21:06.905 }' 00:21:06.905 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.905 23:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.470 23:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:07.728 [2024-07-13 23:06:57.116275] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:07.728 [2024-07-13 23:06:57.116639] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:21:07.728 23:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:07.986 [2024-07-13 23:06:57.364256] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:07.986 [2024-07-13 23:06:57.364506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:07.986 [2024-07-13 23:06:57.364639] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:07.986 [2024-07-13 23:06:57.364791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:07.986 [2024-07-13 23:06:57.364898] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:07.986 [2024-07-13 23:06:57.365049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:07.986 [2024-07-13 23:06:57.365154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:07.986 [2024-07-13 23:06:57.365232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:07.986 23:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:08.243 [2024-07-13 23:06:57.595666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.243 BaseBdev1 00:21:08.243 23:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:08.244 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:08.244 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:08.244 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:08.244 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:08.244 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:08.244 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:08.502 23:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:08.761 [ 00:21:08.761 { 00:21:08.761 "name": "BaseBdev1", 00:21:08.761 "aliases": [ 00:21:08.761 "9b11236a-ca4e-4293-bba0-a54d442a3b0f" 00:21:08.761 ], 00:21:08.761 "product_name": "Malloc disk", 00:21:08.761 "block_size": 512, 00:21:08.761 "num_blocks": 65536, 00:21:08.761 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:08.761 "assigned_rate_limits": { 00:21:08.761 "rw_ios_per_sec": 0, 00:21:08.761 "rw_mbytes_per_sec": 0, 00:21:08.761 "r_mbytes_per_sec": 0, 00:21:08.761 "w_mbytes_per_sec": 0 00:21:08.761 }, 00:21:08.761 "claimed": true, 00:21:08.761 "claim_type": "exclusive_write", 00:21:08.761 "zoned": false, 00:21:08.761 "supported_io_types": { 00:21:08.761 "read": true, 00:21:08.761 "write": true, 00:21:08.761 "unmap": true, 00:21:08.761 "flush": true, 00:21:08.761 "reset": true, 00:21:08.761 "nvme_admin": false, 00:21:08.761 "nvme_io": false, 00:21:08.761 "nvme_io_md": false, 00:21:08.761 "write_zeroes": true, 00:21:08.762 "zcopy": true, 00:21:08.762 "get_zone_info": false, 00:21:08.762 "zone_management": false, 00:21:08.762 "zone_append": false, 00:21:08.762 "compare": false, 00:21:08.762 "compare_and_write": false, 00:21:08.762 "abort": true, 00:21:08.762 "seek_hole": false, 00:21:08.762 "seek_data": false, 00:21:08.762 "copy": true, 00:21:08.762 "nvme_iov_md": false 00:21:08.762 }, 00:21:08.762 "memory_domains": [ 00:21:08.762 { 00:21:08.762 "dma_device_id": "system", 00:21:08.762 "dma_device_type": 1 00:21:08.762 }, 00:21:08.762 { 00:21:08.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.762 "dma_device_type": 2 00:21:08.762 } 00:21:08.762 ], 00:21:08.762 "driver_specific": {} 00:21:08.762 } 00:21:08.762 ] 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.762 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.020 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.020 "name": "Existed_Raid", 00:21:09.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.020 "strip_size_kb": 64, 00:21:09.020 "state": "configuring", 00:21:09.020 "raid_level": "raid0", 00:21:09.020 "superblock": false, 00:21:09.020 "num_base_bdevs": 4, 00:21:09.020 "num_base_bdevs_discovered": 1, 00:21:09.020 "num_base_bdevs_operational": 4, 00:21:09.020 "base_bdevs_list": [ 00:21:09.021 { 00:21:09.021 "name": "BaseBdev1", 00:21:09.021 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:09.021 "is_configured": true, 00:21:09.021 "data_offset": 0, 00:21:09.021 "data_size": 65536 00:21:09.021 }, 00:21:09.021 { 00:21:09.021 "name": "BaseBdev2", 00:21:09.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.021 "is_configured": false, 00:21:09.021 "data_offset": 0, 00:21:09.021 "data_size": 0 00:21:09.021 }, 00:21:09.021 { 00:21:09.021 "name": "BaseBdev3", 00:21:09.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.021 "is_configured": false, 00:21:09.021 "data_offset": 0, 00:21:09.021 "data_size": 0 00:21:09.021 }, 00:21:09.021 { 00:21:09.021 "name": "BaseBdev4", 00:21:09.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.021 "is_configured": false, 00:21:09.021 "data_offset": 0, 00:21:09.021 "data_size": 0 00:21:09.021 } 00:21:09.021 ] 00:21:09.021 }' 00:21:09.021 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.021 23:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.589 23:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:09.849 [2024-07-13 23:06:59.160633] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:09.849 [2024-07-13 23:06:59.161016] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:21:09.849 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:10.108 [2024-07-13 23:06:59.444823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:10.108 [2024-07-13 23:06:59.448167] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:21:10.108 [2024-07-13 23:06:59.448445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:10.108 [2024-07-13 23:06:59.448603] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:10.108 [2024-07-13 23:06:59.448693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:10.108 [2024-07-13 23:06:59.448893] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:10.108 [2024-07-13 23:06:59.449003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.108 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.367 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.367 "name": "Existed_Raid", 00:21:10.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.367 "strip_size_kb": 64, 00:21:10.367 "state": "configuring", 00:21:10.367 "raid_level": "raid0", 00:21:10.367 "superblock": false, 00:21:10.367 "num_base_bdevs": 4, 00:21:10.367 "num_base_bdevs_discovered": 1, 00:21:10.367 "num_base_bdevs_operational": 4, 00:21:10.367 "base_bdevs_list": [ 00:21:10.367 { 00:21:10.367 "name": "BaseBdev1", 00:21:10.367 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:10.367 "is_configured": true, 00:21:10.367 "data_offset": 0, 00:21:10.367 "data_size": 65536 00:21:10.367 }, 00:21:10.367 { 00:21:10.367 "name": "BaseBdev2", 00:21:10.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.367 "is_configured": false, 00:21:10.367 "data_offset": 0, 00:21:10.367 "data_size": 0 00:21:10.367 }, 00:21:10.367 { 00:21:10.367 "name": "BaseBdev3", 00:21:10.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.367 "is_configured": false, 00:21:10.367 "data_offset": 0, 00:21:10.367 "data_size": 0 00:21:10.367 }, 
00:21:10.367 { 00:21:10.367 "name": "BaseBdev4", 00:21:10.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.367 "is_configured": false, 00:21:10.367 "data_offset": 0, 00:21:10.367 "data_size": 0 00:21:10.367 } 00:21:10.367 ] 00:21:10.367 }' 00:21:10.367 23:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.367 23:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:11.303 [2024-07-13 23:07:00.606113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.303 BaseBdev2 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:11.303 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:11.561 23:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:11.820 [ 00:21:11.820 { 00:21:11.820 "name": "BaseBdev2", 00:21:11.820 "aliases": [ 00:21:11.820 "53c02264-6f09-43fe-b701-85d7cf00e19e" 00:21:11.820 ], 00:21:11.820 "product_name": "Malloc disk", 00:21:11.820 "block_size": 512, 00:21:11.820 "num_blocks": 65536, 00:21:11.820 "uuid": "53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:11.820 "assigned_rate_limits": { 00:21:11.820 "rw_ios_per_sec": 0, 00:21:11.820 "rw_mbytes_per_sec": 0, 00:21:11.820 "r_mbytes_per_sec": 0, 00:21:11.820 "w_mbytes_per_sec": 0 00:21:11.820 }, 00:21:11.820 "claimed": true, 00:21:11.820 "claim_type": "exclusive_write", 00:21:11.820 "zoned": false, 00:21:11.820 "supported_io_types": { 00:21:11.820 "read": true, 00:21:11.820 "write": true, 00:21:11.820 "unmap": true, 00:21:11.820 "flush": true, 00:21:11.820 "reset": true, 00:21:11.820 "nvme_admin": false, 00:21:11.820 "nvme_io": false, 00:21:11.820 "nvme_io_md": false, 00:21:11.820 "write_zeroes": true, 00:21:11.820 "zcopy": true, 00:21:11.820 "get_zone_info": false, 00:21:11.820 "zone_management": false, 00:21:11.820 "zone_append": false, 00:21:11.820 "compare": false, 00:21:11.820 "compare_and_write": false, 00:21:11.820 "abort": true, 00:21:11.820 "seek_hole": false, 00:21:11.820 "seek_data": false, 00:21:11.820 "copy": true, 00:21:11.820 "nvme_iov_md": false 00:21:11.820 }, 00:21:11.820 "memory_domains": [ 00:21:11.820 { 00:21:11.820 "dma_device_id": "system", 00:21:11.820 "dma_device_type": 1 00:21:11.820 }, 00:21:11.820 { 00:21:11.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.820 "dma_device_type": 2 00:21:11.820 } 00:21:11.820 ], 00:21:11.820 "driver_specific": {} 00:21:11.820 } 00:21:11.820 ] 00:21:11.820 23:07:01 
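
Each bdev_malloc_create from here on is claimed by the raid module and bumps num_base_bdevs_discovered by one while Existed_Raid stays "configuring"; the waitforbdev helper traced above simply polls bdev_get_bdevs until the new disk shows up. One iteration of that loop, sketched ($rpc as above):

$rpc bdev_malloc_create 32 512 -b BaseBdev2
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b BaseBdev2 -t 2000   # waitforbdev: returns once BaseBdev2 exists
# Existed_Raid: num_base_bdevs_discovered 1 -> 2, state still "configuring";
# only after BaseBdev4 is registered does the state flip to "online".
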
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.820 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.078 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:12.078 "name": "Existed_Raid", 00:21:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.078 "strip_size_kb": 64, 00:21:12.078 "state": "configuring", 00:21:12.078 "raid_level": "raid0", 00:21:12.078 "superblock": false, 00:21:12.078 "num_base_bdevs": 4, 00:21:12.078 "num_base_bdevs_discovered": 2, 00:21:12.078 "num_base_bdevs_operational": 4, 00:21:12.078 "base_bdevs_list": [ 00:21:12.078 { 00:21:12.078 "name": "BaseBdev1", 00:21:12.078 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:12.078 "is_configured": true, 00:21:12.078 "data_offset": 0, 00:21:12.078 "data_size": 65536 00:21:12.078 }, 00:21:12.078 { 00:21:12.078 "name": "BaseBdev2", 00:21:12.078 "uuid": "53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:12.078 "is_configured": true, 00:21:12.078 "data_offset": 0, 00:21:12.078 "data_size": 65536 00:21:12.078 }, 00:21:12.078 { 00:21:12.078 "name": "BaseBdev3", 00:21:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.078 "is_configured": false, 00:21:12.078 "data_offset": 0, 00:21:12.078 "data_size": 0 00:21:12.078 }, 00:21:12.078 { 00:21:12.078 "name": "BaseBdev4", 00:21:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.078 "is_configured": false, 00:21:12.078 "data_offset": 0, 00:21:12.078 "data_size": 0 00:21:12.078 } 00:21:12.078 ] 00:21:12.078 }' 00:21:12.078 23:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:12.078 23:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.643 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:21:12.905 [2024-07-13 23:07:02.237384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:12.905 BaseBdev3 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:12.905 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:13.164 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:13.422 [ 00:21:13.422 { 00:21:13.422 "name": "BaseBdev3", 00:21:13.422 "aliases": [ 00:21:13.422 "9de92137-860a-4ed9-afb8-453cbe20fa34" 00:21:13.422 ], 00:21:13.422 "product_name": "Malloc disk", 00:21:13.422 "block_size": 512, 00:21:13.422 "num_blocks": 65536, 00:21:13.422 "uuid": "9de92137-860a-4ed9-afb8-453cbe20fa34", 00:21:13.422 "assigned_rate_limits": { 00:21:13.422 "rw_ios_per_sec": 0, 00:21:13.422 "rw_mbytes_per_sec": 0, 00:21:13.422 "r_mbytes_per_sec": 0, 00:21:13.422 "w_mbytes_per_sec": 0 00:21:13.422 }, 00:21:13.422 "claimed": true, 00:21:13.422 "claim_type": "exclusive_write", 00:21:13.422 "zoned": false, 00:21:13.422 "supported_io_types": { 00:21:13.422 "read": true, 00:21:13.422 "write": true, 00:21:13.422 "unmap": true, 00:21:13.422 "flush": true, 00:21:13.422 "reset": true, 00:21:13.422 "nvme_admin": false, 00:21:13.422 "nvme_io": false, 00:21:13.422 "nvme_io_md": false, 00:21:13.422 "write_zeroes": true, 00:21:13.422 "zcopy": true, 00:21:13.422 "get_zone_info": false, 00:21:13.422 "zone_management": false, 00:21:13.422 "zone_append": false, 00:21:13.422 "compare": false, 00:21:13.422 "compare_and_write": false, 00:21:13.422 "abort": true, 00:21:13.422 "seek_hole": false, 00:21:13.422 "seek_data": false, 00:21:13.422 "copy": true, 00:21:13.422 "nvme_iov_md": false 00:21:13.422 }, 00:21:13.422 "memory_domains": [ 00:21:13.422 { 00:21:13.422 "dma_device_id": "system", 00:21:13.422 "dma_device_type": 1 00:21:13.422 }, 00:21:13.422 { 00:21:13.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.422 "dma_device_type": 2 00:21:13.422 } 00:21:13.422 ], 00:21:13.422 "driver_specific": {} 00:21:13.422 } 00:21:13.422 ] 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.422 23:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.680 23:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.680 "name": "Existed_Raid", 00:21:13.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.680 "strip_size_kb": 64, 00:21:13.680 "state": "configuring", 00:21:13.680 "raid_level": "raid0", 00:21:13.680 "superblock": false, 00:21:13.680 "num_base_bdevs": 4, 00:21:13.680 "num_base_bdevs_discovered": 3, 00:21:13.680 "num_base_bdevs_operational": 4, 00:21:13.680 "base_bdevs_list": [ 00:21:13.680 { 00:21:13.680 "name": "BaseBdev1", 00:21:13.680 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:13.680 "is_configured": true, 00:21:13.680 "data_offset": 0, 00:21:13.680 "data_size": 65536 00:21:13.680 }, 00:21:13.680 { 00:21:13.680 "name": "BaseBdev2", 00:21:13.680 "uuid": "53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:13.680 "is_configured": true, 00:21:13.680 "data_offset": 0, 00:21:13.680 "data_size": 65536 00:21:13.680 }, 00:21:13.680 { 00:21:13.680 "name": "BaseBdev3", 00:21:13.680 "uuid": "9de92137-860a-4ed9-afb8-453cbe20fa34", 00:21:13.680 "is_configured": true, 00:21:13.680 "data_offset": 0, 00:21:13.680 "data_size": 65536 00:21:13.680 }, 00:21:13.680 { 00:21:13.680 "name": "BaseBdev4", 00:21:13.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.680 "is_configured": false, 00:21:13.680 "data_offset": 0, 00:21:13.680 "data_size": 0 00:21:13.680 } 00:21:13.680 ] 00:21:13.680 }' 00:21:13.680 23:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.680 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.249 23:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:14.815 [2024-07-13 23:07:03.922717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:14.815 [2024-07-13 23:07:03.922963] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:21:14.815 [2024-07-13 23:07:03.923048] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:14.815 [2024-07-13 23:07:03.923370] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:21:14.815 [2024-07-13 23:07:03.924037] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:21:14.815 [2024-07-13 23:07:03.924197] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:21:14.815 [2024-07-13 23:07:03.924553] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.815 BaseBdev4 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:14.815 23:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.815 23:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:15.072 [ 00:21:15.072 { 00:21:15.073 "name": "BaseBdev4", 00:21:15.073 "aliases": [ 00:21:15.073 "4ad7d492-ea80-4515-9890-12d79de7fd5b" 00:21:15.073 ], 00:21:15.073 "product_name": "Malloc disk", 00:21:15.073 "block_size": 512, 00:21:15.073 "num_blocks": 65536, 00:21:15.073 "uuid": "4ad7d492-ea80-4515-9890-12d79de7fd5b", 00:21:15.073 "assigned_rate_limits": { 00:21:15.073 "rw_ios_per_sec": 0, 00:21:15.073 "rw_mbytes_per_sec": 0, 00:21:15.073 "r_mbytes_per_sec": 0, 00:21:15.073 "w_mbytes_per_sec": 0 00:21:15.073 }, 00:21:15.073 "claimed": true, 00:21:15.073 "claim_type": "exclusive_write", 00:21:15.073 "zoned": false, 00:21:15.073 "supported_io_types": { 00:21:15.073 "read": true, 00:21:15.073 "write": true, 00:21:15.073 "unmap": true, 00:21:15.073 "flush": true, 00:21:15.073 "reset": true, 00:21:15.073 "nvme_admin": false, 00:21:15.073 "nvme_io": false, 00:21:15.073 "nvme_io_md": false, 00:21:15.073 "write_zeroes": true, 00:21:15.073 "zcopy": true, 00:21:15.073 "get_zone_info": false, 00:21:15.073 "zone_management": false, 00:21:15.073 "zone_append": false, 00:21:15.073 "compare": false, 00:21:15.073 "compare_and_write": false, 00:21:15.073 "abort": true, 00:21:15.073 "seek_hole": false, 00:21:15.073 "seek_data": false, 00:21:15.073 "copy": true, 00:21:15.073 "nvme_iov_md": false 00:21:15.073 }, 00:21:15.073 "memory_domains": [ 00:21:15.073 { 00:21:15.073 "dma_device_id": "system", 00:21:15.073 "dma_device_type": 1 00:21:15.073 }, 00:21:15.073 { 00:21:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.073 "dma_device_type": 2 00:21:15.073 } 00:21:15.073 ], 00:21:15.073 "driver_specific": {} 00:21:15.073 } 00:21:15.073 ] 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.073 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.331 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.331 "name": "Existed_Raid", 00:21:15.331 "uuid": "b1f4d956-58a9-4178-85f8-05e2bb85c62d", 00:21:15.331 "strip_size_kb": 64, 00:21:15.331 "state": "online", 00:21:15.331 "raid_level": "raid0", 00:21:15.331 "superblock": false, 00:21:15.331 "num_base_bdevs": 4, 00:21:15.331 "num_base_bdevs_discovered": 4, 00:21:15.331 "num_base_bdevs_operational": 4, 00:21:15.331 "base_bdevs_list": [ 00:21:15.331 { 00:21:15.331 "name": "BaseBdev1", 00:21:15.331 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:15.331 "is_configured": true, 00:21:15.331 "data_offset": 0, 00:21:15.331 "data_size": 65536 00:21:15.331 }, 00:21:15.331 { 00:21:15.331 "name": "BaseBdev2", 00:21:15.331 "uuid": "53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:15.331 "is_configured": true, 00:21:15.331 "data_offset": 0, 00:21:15.331 "data_size": 65536 00:21:15.331 }, 00:21:15.331 { 00:21:15.331 "name": "BaseBdev3", 00:21:15.331 "uuid": "9de92137-860a-4ed9-afb8-453cbe20fa34", 00:21:15.331 "is_configured": true, 00:21:15.331 "data_offset": 0, 00:21:15.331 "data_size": 65536 00:21:15.331 }, 00:21:15.331 { 00:21:15.331 "name": "BaseBdev4", 00:21:15.331 "uuid": "4ad7d492-ea80-4515-9890-12d79de7fd5b", 00:21:15.331 "is_configured": true, 00:21:15.331 "data_offset": 0, 00:21:15.331 "data_size": 65536 00:21:15.331 } 00:21:15.331 ] 00:21:15.331 }' 00:21:15.331 23:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.331 23:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:16.266 23:07:05 
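
verify_raid_bdev_properties, whose trace follows, checks that the assembled raid volume advertises the same geometry as each base bdev. A minimal equivalent of the comparison (the field list matches the trace; the loop and herestrings are assumptions for brevity, $rpc as above):

raid_bdev_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_info=$($rpc bdev_get_bdevs -b BaseBdev1 | jq '.[]')
for field in .block_size .md_size .md_interleave .dif_type; do
  # e.g. block_size 512 == 512, md_size null == null
  [[ $(jq "$field" <<< "$raid_bdev_info") == $(jq "$field" <<< "$base_bdev_info") ]] || exit 1
done
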
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:16.266 [2024-07-13 23:07:05.522636] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:16.266 "name": "Existed_Raid", 00:21:16.266 "aliases": [ 00:21:16.266 "b1f4d956-58a9-4178-85f8-05e2bb85c62d" 00:21:16.266 ], 00:21:16.266 "product_name": "Raid Volume", 00:21:16.266 "block_size": 512, 00:21:16.266 "num_blocks": 262144, 00:21:16.266 "uuid": "b1f4d956-58a9-4178-85f8-05e2bb85c62d", 00:21:16.266 "assigned_rate_limits": { 00:21:16.266 "rw_ios_per_sec": 0, 00:21:16.266 "rw_mbytes_per_sec": 0, 00:21:16.266 "r_mbytes_per_sec": 0, 00:21:16.266 "w_mbytes_per_sec": 0 00:21:16.266 }, 00:21:16.266 "claimed": false, 00:21:16.266 "zoned": false, 00:21:16.266 "supported_io_types": { 00:21:16.266 "read": true, 00:21:16.266 "write": true, 00:21:16.266 "unmap": true, 00:21:16.266 "flush": true, 00:21:16.266 "reset": true, 00:21:16.266 "nvme_admin": false, 00:21:16.266 "nvme_io": false, 00:21:16.266 "nvme_io_md": false, 00:21:16.266 "write_zeroes": true, 00:21:16.266 "zcopy": false, 00:21:16.266 "get_zone_info": false, 00:21:16.266 "zone_management": false, 00:21:16.266 "zone_append": false, 00:21:16.266 "compare": false, 00:21:16.266 "compare_and_write": false, 00:21:16.266 "abort": false, 00:21:16.266 "seek_hole": false, 00:21:16.266 "seek_data": false, 00:21:16.266 "copy": false, 00:21:16.266 "nvme_iov_md": false 00:21:16.266 }, 00:21:16.266 "memory_domains": [ 00:21:16.266 { 00:21:16.266 "dma_device_id": "system", 00:21:16.266 "dma_device_type": 1 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.266 "dma_device_type": 2 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "system", 00:21:16.266 "dma_device_type": 1 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.266 "dma_device_type": 2 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "system", 00:21:16.266 "dma_device_type": 1 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.266 "dma_device_type": 2 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "system", 00:21:16.266 "dma_device_type": 1 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.266 "dma_device_type": 2 00:21:16.266 } 00:21:16.266 ], 00:21:16.266 "driver_specific": { 00:21:16.266 "raid": { 00:21:16.266 "uuid": "b1f4d956-58a9-4178-85f8-05e2bb85c62d", 00:21:16.266 "strip_size_kb": 64, 00:21:16.266 "state": "online", 00:21:16.266 "raid_level": "raid0", 00:21:16.266 "superblock": false, 00:21:16.266 "num_base_bdevs": 4, 00:21:16.266 "num_base_bdevs_discovered": 4, 00:21:16.266 "num_base_bdevs_operational": 4, 00:21:16.266 "base_bdevs_list": [ 00:21:16.266 { 00:21:16.266 "name": "BaseBdev1", 00:21:16.266 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:16.266 "is_configured": true, 00:21:16.266 "data_offset": 0, 00:21:16.266 "data_size": 65536 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "name": "BaseBdev2", 00:21:16.266 "uuid": "53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:16.266 
"is_configured": true, 00:21:16.266 "data_offset": 0, 00:21:16.266 "data_size": 65536 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "name": "BaseBdev3", 00:21:16.266 "uuid": "9de92137-860a-4ed9-afb8-453cbe20fa34", 00:21:16.266 "is_configured": true, 00:21:16.266 "data_offset": 0, 00:21:16.266 "data_size": 65536 00:21:16.266 }, 00:21:16.266 { 00:21:16.266 "name": "BaseBdev4", 00:21:16.266 "uuid": "4ad7d492-ea80-4515-9890-12d79de7fd5b", 00:21:16.266 "is_configured": true, 00:21:16.266 "data_offset": 0, 00:21:16.266 "data_size": 65536 00:21:16.266 } 00:21:16.266 ] 00:21:16.266 } 00:21:16.266 } 00:21:16.266 }' 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:16.266 BaseBdev2 00:21:16.266 BaseBdev3 00:21:16.266 BaseBdev4' 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:16.266 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:16.525 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.525 "name": "BaseBdev1", 00:21:16.525 "aliases": [ 00:21:16.525 "9b11236a-ca4e-4293-bba0-a54d442a3b0f" 00:21:16.525 ], 00:21:16.525 "product_name": "Malloc disk", 00:21:16.525 "block_size": 512, 00:21:16.525 "num_blocks": 65536, 00:21:16.525 "uuid": "9b11236a-ca4e-4293-bba0-a54d442a3b0f", 00:21:16.525 "assigned_rate_limits": { 00:21:16.525 "rw_ios_per_sec": 0, 00:21:16.525 "rw_mbytes_per_sec": 0, 00:21:16.525 "r_mbytes_per_sec": 0, 00:21:16.525 "w_mbytes_per_sec": 0 00:21:16.525 }, 00:21:16.525 "claimed": true, 00:21:16.525 "claim_type": "exclusive_write", 00:21:16.525 "zoned": false, 00:21:16.525 "supported_io_types": { 00:21:16.525 "read": true, 00:21:16.525 "write": true, 00:21:16.525 "unmap": true, 00:21:16.525 "flush": true, 00:21:16.525 "reset": true, 00:21:16.525 "nvme_admin": false, 00:21:16.525 "nvme_io": false, 00:21:16.525 "nvme_io_md": false, 00:21:16.525 "write_zeroes": true, 00:21:16.525 "zcopy": true, 00:21:16.525 "get_zone_info": false, 00:21:16.525 "zone_management": false, 00:21:16.525 "zone_append": false, 00:21:16.525 "compare": false, 00:21:16.525 "compare_and_write": false, 00:21:16.525 "abort": true, 00:21:16.525 "seek_hole": false, 00:21:16.525 "seek_data": false, 00:21:16.525 "copy": true, 00:21:16.525 "nvme_iov_md": false 00:21:16.525 }, 00:21:16.525 "memory_domains": [ 00:21:16.525 { 00:21:16.525 "dma_device_id": "system", 00:21:16.525 "dma_device_type": 1 00:21:16.525 }, 00:21:16.525 { 00:21:16.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.525 "dma_device_type": 2 00:21:16.525 } 00:21:16.525 ], 00:21:16.525 "driver_specific": {} 00:21:16.525 }' 00:21:16.525 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.525 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.783 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.783 23:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.783 23:07:05 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.783 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.783 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.783 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.783 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.783 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.783 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.041 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.041 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:17.041 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:17.041 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:17.299 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:17.299 "name": "BaseBdev2", 00:21:17.299 "aliases": [ 00:21:17.299 "53c02264-6f09-43fe-b701-85d7cf00e19e" 00:21:17.299 ], 00:21:17.299 "product_name": "Malloc disk", 00:21:17.299 "block_size": 512, 00:21:17.299 "num_blocks": 65536, 00:21:17.299 "uuid": "53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:17.299 "assigned_rate_limits": { 00:21:17.299 "rw_ios_per_sec": 0, 00:21:17.299 "rw_mbytes_per_sec": 0, 00:21:17.299 "r_mbytes_per_sec": 0, 00:21:17.299 "w_mbytes_per_sec": 0 00:21:17.299 }, 00:21:17.300 "claimed": true, 00:21:17.300 "claim_type": "exclusive_write", 00:21:17.300 "zoned": false, 00:21:17.300 "supported_io_types": { 00:21:17.300 "read": true, 00:21:17.300 "write": true, 00:21:17.300 "unmap": true, 00:21:17.300 "flush": true, 00:21:17.300 "reset": true, 00:21:17.300 "nvme_admin": false, 00:21:17.300 "nvme_io": false, 00:21:17.300 "nvme_io_md": false, 00:21:17.300 "write_zeroes": true, 00:21:17.300 "zcopy": true, 00:21:17.300 "get_zone_info": false, 00:21:17.300 "zone_management": false, 00:21:17.300 "zone_append": false, 00:21:17.300 "compare": false, 00:21:17.300 "compare_and_write": false, 00:21:17.300 "abort": true, 00:21:17.300 "seek_hole": false, 00:21:17.300 "seek_data": false, 00:21:17.300 "copy": true, 00:21:17.300 "nvme_iov_md": false 00:21:17.300 }, 00:21:17.300 "memory_domains": [ 00:21:17.300 { 00:21:17.300 "dma_device_id": "system", 00:21:17.300 "dma_device_type": 1 00:21:17.300 }, 00:21:17.300 { 00:21:17.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.300 "dma_device_type": 2 00:21:17.300 } 00:21:17.300 ], 00:21:17.300 "driver_specific": {} 00:21:17.300 }' 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:17.300 23:07:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.300 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:17.557 23:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:17.816 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:17.816 "name": "BaseBdev3", 00:21:17.816 "aliases": [ 00:21:17.816 "9de92137-860a-4ed9-afb8-453cbe20fa34" 00:21:17.816 ], 00:21:17.816 "product_name": "Malloc disk", 00:21:17.816 "block_size": 512, 00:21:17.816 "num_blocks": 65536, 00:21:17.816 "uuid": "9de92137-860a-4ed9-afb8-453cbe20fa34", 00:21:17.816 "assigned_rate_limits": { 00:21:17.816 "rw_ios_per_sec": 0, 00:21:17.816 "rw_mbytes_per_sec": 0, 00:21:17.816 "r_mbytes_per_sec": 0, 00:21:17.816 "w_mbytes_per_sec": 0 00:21:17.816 }, 00:21:17.816 "claimed": true, 00:21:17.816 "claim_type": "exclusive_write", 00:21:17.816 "zoned": false, 00:21:17.816 "supported_io_types": { 00:21:17.816 "read": true, 00:21:17.816 "write": true, 00:21:17.816 "unmap": true, 00:21:17.816 "flush": true, 00:21:17.816 "reset": true, 00:21:17.816 "nvme_admin": false, 00:21:17.816 "nvme_io": false, 00:21:17.816 "nvme_io_md": false, 00:21:17.816 "write_zeroes": true, 00:21:17.816 "zcopy": true, 00:21:17.816 "get_zone_info": false, 00:21:17.816 "zone_management": false, 00:21:17.816 "zone_append": false, 00:21:17.816 "compare": false, 00:21:17.816 "compare_and_write": false, 00:21:17.816 "abort": true, 00:21:17.816 "seek_hole": false, 00:21:17.816 "seek_data": false, 00:21:17.816 "copy": true, 00:21:17.816 "nvme_iov_md": false 00:21:17.816 }, 00:21:17.816 "memory_domains": [ 00:21:17.816 { 00:21:17.816 "dma_device_id": "system", 00:21:17.816 "dma_device_type": 1 00:21:17.816 }, 00:21:17.816 { 00:21:17.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.816 "dma_device_type": 2 00:21:17.816 } 00:21:17.816 ], 00:21:17.816 "driver_specific": {} 00:21:17.816 }' 00:21:17.816 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.816 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:18.073 
23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:18.073 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:18.351 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:18.351 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:18.351 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:18.351 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:18.351 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:18.609 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:18.609 "name": "BaseBdev4", 00:21:18.609 "aliases": [ 00:21:18.609 "4ad7d492-ea80-4515-9890-12d79de7fd5b" 00:21:18.609 ], 00:21:18.609 "product_name": "Malloc disk", 00:21:18.609 "block_size": 512, 00:21:18.609 "num_blocks": 65536, 00:21:18.609 "uuid": "4ad7d492-ea80-4515-9890-12d79de7fd5b", 00:21:18.609 "assigned_rate_limits": { 00:21:18.609 "rw_ios_per_sec": 0, 00:21:18.609 "rw_mbytes_per_sec": 0, 00:21:18.609 "r_mbytes_per_sec": 0, 00:21:18.609 "w_mbytes_per_sec": 0 00:21:18.609 }, 00:21:18.609 "claimed": true, 00:21:18.609 "claim_type": "exclusive_write", 00:21:18.609 "zoned": false, 00:21:18.609 "supported_io_types": { 00:21:18.609 "read": true, 00:21:18.609 "write": true, 00:21:18.609 "unmap": true, 00:21:18.609 "flush": true, 00:21:18.609 "reset": true, 00:21:18.609 "nvme_admin": false, 00:21:18.609 "nvme_io": false, 00:21:18.609 "nvme_io_md": false, 00:21:18.609 "write_zeroes": true, 00:21:18.609 "zcopy": true, 00:21:18.609 "get_zone_info": false, 00:21:18.609 "zone_management": false, 00:21:18.609 "zone_append": false, 00:21:18.609 "compare": false, 00:21:18.609 "compare_and_write": false, 00:21:18.609 "abort": true, 00:21:18.609 "seek_hole": false, 00:21:18.609 "seek_data": false, 00:21:18.609 "copy": true, 00:21:18.609 "nvme_iov_md": false 00:21:18.609 }, 00:21:18.609 "memory_domains": [ 00:21:18.609 { 00:21:18.609 "dma_device_id": "system", 00:21:18.609 "dma_device_type": 1 00:21:18.609 }, 00:21:18.609 { 00:21:18.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.609 "dma_device_type": 2 00:21:18.609 } 00:21:18.609 ], 00:21:18.609 "driver_specific": {} 00:21:18.609 }' 00:21:18.609 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:18.609 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:18.609 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:18.609 23:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:18.609 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
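The repeated jq probes above, which continue below for BaseBdev4, are the body of verify_raid_bdev_properties (bdev_raid.sh@194-208): for every configured base bdev of Existed_Raid the test asserts a 512-byte block size and that md_size, md_interleave and dif_type are all null. A condensed sketch of that loop, using the same rpc.py client and /var/tmp/spdk-raid.sock socket as this run (a simplified illustration, not the verbatim helper):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Pull the configured member names out of the raid volume dump.
names=$($rpc bdev_get_bdevs -b Existed_Raid \
  | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
for name in $names; do
  info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
  [[ $(jq .block_size <<<"$info") == 512 ]]     # raw data blocks only
  [[ $(jq .md_size <<<"$info") == null ]]       # no per-block metadata
  [[ $(jq .md_interleave <<<"$info") == null ]]
  [[ $(jq .dif_type <<<"$info") == null ]]      # no DIF/DIX protection
done

Each [[ ]] test would abort the run under set -e if a Malloc base bdev ever reported metadata or DIF settings, which this raid0 configuration does not expect.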
00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:18.868 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:19.126 [2024-07-13 23:07:08.459265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.126 [2024-07-13 23:07:08.459463] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.126 [2024-07-13 23:07:08.459663] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.126 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.384 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.384 "name": "Existed_Raid", 00:21:19.384 "uuid": "b1f4d956-58a9-4178-85f8-05e2bb85c62d", 00:21:19.384 "strip_size_kb": 64, 00:21:19.384 "state": "offline", 00:21:19.384 "raid_level": "raid0", 00:21:19.384 "superblock": false, 00:21:19.384 "num_base_bdevs": 4, 00:21:19.384 "num_base_bdevs_discovered": 3, 00:21:19.384 "num_base_bdevs_operational": 3, 00:21:19.384 "base_bdevs_list": [ 00:21:19.384 { 00:21:19.384 "name": null, 00:21:19.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.384 "is_configured": false, 00:21:19.384 "data_offset": 0, 00:21:19.384 "data_size": 65536 00:21:19.384 }, 00:21:19.384 { 00:21:19.384 "name": "BaseBdev2", 00:21:19.384 "uuid": 
"53c02264-6f09-43fe-b701-85d7cf00e19e", 00:21:19.384 "is_configured": true, 00:21:19.384 "data_offset": 0, 00:21:19.384 "data_size": 65536 00:21:19.384 }, 00:21:19.384 { 00:21:19.384 "name": "BaseBdev3", 00:21:19.384 "uuid": "9de92137-860a-4ed9-afb8-453cbe20fa34", 00:21:19.384 "is_configured": true, 00:21:19.384 "data_offset": 0, 00:21:19.384 "data_size": 65536 00:21:19.384 }, 00:21:19.384 { 00:21:19.384 "name": "BaseBdev4", 00:21:19.384 "uuid": "4ad7d492-ea80-4515-9890-12d79de7fd5b", 00:21:19.384 "is_configured": true, 00:21:19.384 "data_offset": 0, 00:21:19.384 "data_size": 65536 00:21:19.384 } 00:21:19.384 ] 00:21:19.384 }' 00:21:19.384 23:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.384 23:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:20.317 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:20.574 [2024-07-13 23:07:09.842683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:20.574 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:20.574 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:20.574 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.574 23:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:20.831 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:20.832 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:20.832 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:21.089 [2024-07-13 23:07:10.441769] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:21.089 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:21.089 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:21.089 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:21.089 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.345 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:21.345 23:07:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:21.345 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:21.602 [2024-07-13 23:07:10.915730] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:21.602 [2024-07-13 23:07:10.915959] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:21:21.602 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:21.602 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:21.602 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.602 23:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:21.859 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:21.859 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:21.859 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:21.859 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:21.859 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:21.859 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:22.116 BaseBdev2 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:22.116 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:22.373 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:22.631 [ 00:21:22.631 { 00:21:22.631 "name": "BaseBdev2", 00:21:22.631 "aliases": [ 00:21:22.631 "faa0cd07-214e-4684-a4b7-65e909615fc3" 00:21:22.631 ], 00:21:22.631 "product_name": "Malloc disk", 00:21:22.631 "block_size": 512, 00:21:22.631 "num_blocks": 65536, 00:21:22.631 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:22.631 "assigned_rate_limits": { 00:21:22.631 "rw_ios_per_sec": 0, 00:21:22.631 "rw_mbytes_per_sec": 0, 00:21:22.631 "r_mbytes_per_sec": 0, 00:21:22.631 "w_mbytes_per_sec": 0 00:21:22.631 }, 00:21:22.631 "claimed": false, 00:21:22.631 "zoned": false, 00:21:22.631 "supported_io_types": { 00:21:22.631 "read": true, 00:21:22.631 "write": true, 00:21:22.631 "unmap": 
true, 00:21:22.631 "flush": true, 00:21:22.631 "reset": true, 00:21:22.631 "nvme_admin": false, 00:21:22.631 "nvme_io": false, 00:21:22.631 "nvme_io_md": false, 00:21:22.631 "write_zeroes": true, 00:21:22.631 "zcopy": true, 00:21:22.631 "get_zone_info": false, 00:21:22.631 "zone_management": false, 00:21:22.631 "zone_append": false, 00:21:22.631 "compare": false, 00:21:22.631 "compare_and_write": false, 00:21:22.631 "abort": true, 00:21:22.631 "seek_hole": false, 00:21:22.631 "seek_data": false, 00:21:22.631 "copy": true, 00:21:22.631 "nvme_iov_md": false 00:21:22.631 }, 00:21:22.631 "memory_domains": [ 00:21:22.631 { 00:21:22.631 "dma_device_id": "system", 00:21:22.631 "dma_device_type": 1 00:21:22.631 }, 00:21:22.631 { 00:21:22.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.631 "dma_device_type": 2 00:21:22.631 } 00:21:22.631 ], 00:21:22.631 "driver_specific": {} 00:21:22.631 } 00:21:22.631 ] 00:21:22.631 23:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:22.631 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:22.631 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:22.631 23:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:22.887 BaseBdev3 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:22.887 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.143 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:23.401 [ 00:21:23.401 { 00:21:23.401 "name": "BaseBdev3", 00:21:23.401 "aliases": [ 00:21:23.401 "9ac9c19c-ab8f-4397-97be-a02164318a4d" 00:21:23.401 ], 00:21:23.401 "product_name": "Malloc disk", 00:21:23.401 "block_size": 512, 00:21:23.401 "num_blocks": 65536, 00:21:23.401 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:23.401 "assigned_rate_limits": { 00:21:23.401 "rw_ios_per_sec": 0, 00:21:23.401 "rw_mbytes_per_sec": 0, 00:21:23.401 "r_mbytes_per_sec": 0, 00:21:23.401 "w_mbytes_per_sec": 0 00:21:23.401 }, 00:21:23.401 "claimed": false, 00:21:23.401 "zoned": false, 00:21:23.401 "supported_io_types": { 00:21:23.401 "read": true, 00:21:23.401 "write": true, 00:21:23.401 "unmap": true, 00:21:23.401 "flush": true, 00:21:23.402 "reset": true, 00:21:23.402 "nvme_admin": false, 00:21:23.402 "nvme_io": false, 00:21:23.402 "nvme_io_md": false, 00:21:23.402 "write_zeroes": true, 00:21:23.402 "zcopy": true, 00:21:23.402 "get_zone_info": false, 00:21:23.402 "zone_management": false, 00:21:23.402 "zone_append": false, 00:21:23.402 
"compare": false, 00:21:23.402 "compare_and_write": false, 00:21:23.402 "abort": true, 00:21:23.402 "seek_hole": false, 00:21:23.402 "seek_data": false, 00:21:23.402 "copy": true, 00:21:23.402 "nvme_iov_md": false 00:21:23.402 }, 00:21:23.402 "memory_domains": [ 00:21:23.402 { 00:21:23.402 "dma_device_id": "system", 00:21:23.402 "dma_device_type": 1 00:21:23.402 }, 00:21:23.402 { 00:21:23.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.402 "dma_device_type": 2 00:21:23.402 } 00:21:23.402 ], 00:21:23.402 "driver_specific": {} 00:21:23.402 } 00:21:23.402 ] 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:23.402 BaseBdev4 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:23.402 23:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.659 23:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:23.917 [ 00:21:23.917 { 00:21:23.917 "name": "BaseBdev4", 00:21:23.917 "aliases": [ 00:21:23.917 "e4bda9d6-a9b6-459a-907b-b71b750c839a" 00:21:23.917 ], 00:21:23.917 "product_name": "Malloc disk", 00:21:23.917 "block_size": 512, 00:21:23.917 "num_blocks": 65536, 00:21:23.917 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:23.917 "assigned_rate_limits": { 00:21:23.917 "rw_ios_per_sec": 0, 00:21:23.917 "rw_mbytes_per_sec": 0, 00:21:23.917 "r_mbytes_per_sec": 0, 00:21:23.917 "w_mbytes_per_sec": 0 00:21:23.917 }, 00:21:23.917 "claimed": false, 00:21:23.917 "zoned": false, 00:21:23.917 "supported_io_types": { 00:21:23.917 "read": true, 00:21:23.917 "write": true, 00:21:23.917 "unmap": true, 00:21:23.917 "flush": true, 00:21:23.917 "reset": true, 00:21:23.917 "nvme_admin": false, 00:21:23.917 "nvme_io": false, 00:21:23.917 "nvme_io_md": false, 00:21:23.917 "write_zeroes": true, 00:21:23.917 "zcopy": true, 00:21:23.917 "get_zone_info": false, 00:21:23.917 "zone_management": false, 00:21:23.917 "zone_append": false, 00:21:23.917 "compare": false, 00:21:23.917 "compare_and_write": false, 00:21:23.917 "abort": true, 00:21:23.917 "seek_hole": false, 00:21:23.917 "seek_data": false, 00:21:23.917 "copy": true, 00:21:23.917 "nvme_iov_md": false 00:21:23.917 }, 00:21:23.917 "memory_domains": [ 00:21:23.917 { 00:21:23.917 "dma_device_id": "system", 00:21:23.917 
"dma_device_type": 1 00:21:23.917 }, 00:21:23.917 { 00:21:23.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.917 "dma_device_type": 2 00:21:23.917 } 00:21:23.917 ], 00:21:23.917 "driver_specific": {} 00:21:23.917 } 00:21:23.917 ] 00:21:23.917 23:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:23.917 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:23.917 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:23.917 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:24.185 [2024-07-13 23:07:13.468736] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:24.185 [2024-07-13 23:07:13.469219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:24.185 [2024-07-13 23:07:13.469440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:24.185 [2024-07-13 23:07:13.472134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:24.185 [2024-07-13 23:07:13.472362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.185 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.529 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.529 "name": "Existed_Raid", 00:21:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.529 "strip_size_kb": 64, 00:21:24.529 "state": "configuring", 00:21:24.529 "raid_level": "raid0", 00:21:24.529 "superblock": false, 00:21:24.529 "num_base_bdevs": 4, 00:21:24.529 "num_base_bdevs_discovered": 3, 00:21:24.529 "num_base_bdevs_operational": 4, 00:21:24.529 "base_bdevs_list": [ 00:21:24.529 { 00:21:24.529 "name": "BaseBdev1", 00:21:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.529 "is_configured": 
false, 00:21:24.529 "data_offset": 0, 00:21:24.529 "data_size": 0 00:21:24.529 }, 00:21:24.529 { 00:21:24.529 "name": "BaseBdev2", 00:21:24.529 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:24.529 "is_configured": true, 00:21:24.529 "data_offset": 0, 00:21:24.529 "data_size": 65536 00:21:24.529 }, 00:21:24.529 { 00:21:24.529 "name": "BaseBdev3", 00:21:24.529 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:24.529 "is_configured": true, 00:21:24.529 "data_offset": 0, 00:21:24.529 "data_size": 65536 00:21:24.529 }, 00:21:24.529 { 00:21:24.529 "name": "BaseBdev4", 00:21:24.529 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:24.529 "is_configured": true, 00:21:24.529 "data_offset": 0, 00:21:24.529 "data_size": 65536 00:21:24.529 } 00:21:24.529 ] 00:21:24.529 }' 00:21:24.529 23:07:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.529 23:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.095 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:25.353 [2024-07-13 23:07:14.576925] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.353 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.611 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.611 "name": "Existed_Raid", 00:21:25.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.611 "strip_size_kb": 64, 00:21:25.611 "state": "configuring", 00:21:25.611 "raid_level": "raid0", 00:21:25.611 "superblock": false, 00:21:25.611 "num_base_bdevs": 4, 00:21:25.611 "num_base_bdevs_discovered": 2, 00:21:25.611 "num_base_bdevs_operational": 4, 00:21:25.611 "base_bdevs_list": [ 00:21:25.611 { 00:21:25.611 "name": "BaseBdev1", 00:21:25.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.611 "is_configured": false, 00:21:25.611 "data_offset": 0, 00:21:25.611 "data_size": 0 00:21:25.611 }, 00:21:25.611 { 00:21:25.611 "name": null, 00:21:25.611 "uuid": 
"faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:25.611 "is_configured": false, 00:21:25.611 "data_offset": 0, 00:21:25.611 "data_size": 65536 00:21:25.611 }, 00:21:25.611 { 00:21:25.611 "name": "BaseBdev3", 00:21:25.611 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:25.611 "is_configured": true, 00:21:25.611 "data_offset": 0, 00:21:25.611 "data_size": 65536 00:21:25.611 }, 00:21:25.611 { 00:21:25.611 "name": "BaseBdev4", 00:21:25.611 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:25.611 "is_configured": true, 00:21:25.612 "data_offset": 0, 00:21:25.612 "data_size": 65536 00:21:25.612 } 00:21:25.612 ] 00:21:25.612 }' 00:21:25.612 23:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.612 23:07:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.177 23:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.177 23:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:26.435 23:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:26.435 23:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:26.693 [2024-07-13 23:07:15.929723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.693 BaseBdev1 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:26.693 23:07:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:26.951 23:07:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:27.209 [ 00:21:27.209 { 00:21:27.209 "name": "BaseBdev1", 00:21:27.209 "aliases": [ 00:21:27.209 "94b9a35f-9942-4e72-b32d-7cef8588d814" 00:21:27.209 ], 00:21:27.209 "product_name": "Malloc disk", 00:21:27.209 "block_size": 512, 00:21:27.209 "num_blocks": 65536, 00:21:27.209 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:27.209 "assigned_rate_limits": { 00:21:27.209 "rw_ios_per_sec": 0, 00:21:27.209 "rw_mbytes_per_sec": 0, 00:21:27.209 "r_mbytes_per_sec": 0, 00:21:27.209 "w_mbytes_per_sec": 0 00:21:27.209 }, 00:21:27.209 "claimed": true, 00:21:27.209 "claim_type": "exclusive_write", 00:21:27.209 "zoned": false, 00:21:27.209 "supported_io_types": { 00:21:27.209 "read": true, 00:21:27.209 "write": true, 00:21:27.209 "unmap": true, 00:21:27.209 "flush": true, 00:21:27.209 "reset": true, 00:21:27.209 "nvme_admin": false, 00:21:27.209 "nvme_io": false, 00:21:27.209 
"nvme_io_md": false, 00:21:27.209 "write_zeroes": true, 00:21:27.209 "zcopy": true, 00:21:27.209 "get_zone_info": false, 00:21:27.209 "zone_management": false, 00:21:27.209 "zone_append": false, 00:21:27.209 "compare": false, 00:21:27.209 "compare_and_write": false, 00:21:27.209 "abort": true, 00:21:27.209 "seek_hole": false, 00:21:27.209 "seek_data": false, 00:21:27.209 "copy": true, 00:21:27.209 "nvme_iov_md": false 00:21:27.209 }, 00:21:27.209 "memory_domains": [ 00:21:27.209 { 00:21:27.209 "dma_device_id": "system", 00:21:27.209 "dma_device_type": 1 00:21:27.209 }, 00:21:27.209 { 00:21:27.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.209 "dma_device_type": 2 00:21:27.209 } 00:21:27.209 ], 00:21:27.209 "driver_specific": {} 00:21:27.209 } 00:21:27.209 ] 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.209 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.210 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.210 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.468 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.468 "name": "Existed_Raid", 00:21:27.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.468 "strip_size_kb": 64, 00:21:27.468 "state": "configuring", 00:21:27.468 "raid_level": "raid0", 00:21:27.468 "superblock": false, 00:21:27.468 "num_base_bdevs": 4, 00:21:27.468 "num_base_bdevs_discovered": 3, 00:21:27.468 "num_base_bdevs_operational": 4, 00:21:27.468 "base_bdevs_list": [ 00:21:27.468 { 00:21:27.468 "name": "BaseBdev1", 00:21:27.468 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:27.468 "is_configured": true, 00:21:27.468 "data_offset": 0, 00:21:27.468 "data_size": 65536 00:21:27.468 }, 00:21:27.468 { 00:21:27.468 "name": null, 00:21:27.468 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:27.468 "is_configured": false, 00:21:27.468 "data_offset": 0, 00:21:27.468 "data_size": 65536 00:21:27.468 }, 00:21:27.468 { 00:21:27.468 "name": "BaseBdev3", 00:21:27.468 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:27.468 "is_configured": true, 00:21:27.468 "data_offset": 0, 00:21:27.468 "data_size": 65536 00:21:27.468 }, 00:21:27.468 { 00:21:27.468 
"name": "BaseBdev4", 00:21:27.468 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:27.468 "is_configured": true, 00:21:27.468 "data_offset": 0, 00:21:27.468 "data_size": 65536 00:21:27.468 } 00:21:27.468 ] 00:21:27.468 }' 00:21:27.468 23:07:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.468 23:07:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.035 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.035 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:28.293 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:28.293 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:28.554 [2024-07-13 23:07:17.798178] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.554 23:07:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.813 23:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.813 "name": "Existed_Raid", 00:21:28.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.813 "strip_size_kb": 64, 00:21:28.813 "state": "configuring", 00:21:28.813 "raid_level": "raid0", 00:21:28.813 "superblock": false, 00:21:28.813 "num_base_bdevs": 4, 00:21:28.813 "num_base_bdevs_discovered": 2, 00:21:28.813 "num_base_bdevs_operational": 4, 00:21:28.813 "base_bdevs_list": [ 00:21:28.813 { 00:21:28.813 "name": "BaseBdev1", 00:21:28.813 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:28.813 "is_configured": true, 00:21:28.813 "data_offset": 0, 00:21:28.813 "data_size": 65536 00:21:28.813 }, 00:21:28.813 { 00:21:28.813 "name": null, 00:21:28.813 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:28.813 "is_configured": false, 00:21:28.813 "data_offset": 0, 00:21:28.813 "data_size": 
65536 00:21:28.813 }, 00:21:28.813 { 00:21:28.813 "name": null, 00:21:28.813 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:28.813 "is_configured": false, 00:21:28.813 "data_offset": 0, 00:21:28.813 "data_size": 65536 00:21:28.813 }, 00:21:28.813 { 00:21:28.813 "name": "BaseBdev4", 00:21:28.813 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:28.813 "is_configured": true, 00:21:28.813 "data_offset": 0, 00:21:28.813 "data_size": 65536 00:21:28.813 } 00:21:28.813 ] 00:21:28.813 }' 00:21:28.813 23:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.813 23:07:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.381 23:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.381 23:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:29.641 23:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:29.641 23:07:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:29.900 [2024-07-13 23:07:19.166517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:29.900 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.901 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.901 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.901 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.901 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.901 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.160 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.160 "name": "Existed_Raid", 00:21:30.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.160 "strip_size_kb": 64, 00:21:30.160 "state": "configuring", 00:21:30.160 "raid_level": "raid0", 00:21:30.160 "superblock": false, 00:21:30.160 "num_base_bdevs": 4, 00:21:30.160 "num_base_bdevs_discovered": 3, 00:21:30.160 "num_base_bdevs_operational": 4, 00:21:30.160 "base_bdevs_list": [ 00:21:30.160 { 00:21:30.160 "name": "BaseBdev1", 00:21:30.160 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:30.160 
"is_configured": true, 00:21:30.160 "data_offset": 0, 00:21:30.160 "data_size": 65536 00:21:30.160 }, 00:21:30.160 { 00:21:30.160 "name": null, 00:21:30.160 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:30.160 "is_configured": false, 00:21:30.160 "data_offset": 0, 00:21:30.160 "data_size": 65536 00:21:30.160 }, 00:21:30.160 { 00:21:30.160 "name": "BaseBdev3", 00:21:30.160 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:30.160 "is_configured": true, 00:21:30.160 "data_offset": 0, 00:21:30.160 "data_size": 65536 00:21:30.160 }, 00:21:30.160 { 00:21:30.160 "name": "BaseBdev4", 00:21:30.160 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:30.160 "is_configured": true, 00:21:30.160 "data_offset": 0, 00:21:30.160 "data_size": 65536 00:21:30.160 } 00:21:30.160 ] 00:21:30.160 }' 00:21:30.160 23:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.160 23:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.727 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.727 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:30.986 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:30.986 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:31.246 [2024-07-13 23:07:20.542833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.246 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.505 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.505 "name": "Existed_Raid", 00:21:31.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.505 "strip_size_kb": 64, 00:21:31.505 "state": "configuring", 00:21:31.505 "raid_level": "raid0", 00:21:31.505 "superblock": false, 00:21:31.505 
"num_base_bdevs": 4, 00:21:31.505 "num_base_bdevs_discovered": 2, 00:21:31.505 "num_base_bdevs_operational": 4, 00:21:31.505 "base_bdevs_list": [ 00:21:31.505 { 00:21:31.505 "name": null, 00:21:31.505 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:31.505 "is_configured": false, 00:21:31.505 "data_offset": 0, 00:21:31.505 "data_size": 65536 00:21:31.505 }, 00:21:31.505 { 00:21:31.505 "name": null, 00:21:31.505 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:31.505 "is_configured": false, 00:21:31.505 "data_offset": 0, 00:21:31.505 "data_size": 65536 00:21:31.505 }, 00:21:31.505 { 00:21:31.505 "name": "BaseBdev3", 00:21:31.505 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:31.505 "is_configured": true, 00:21:31.505 "data_offset": 0, 00:21:31.505 "data_size": 65536 00:21:31.505 }, 00:21:31.505 { 00:21:31.505 "name": "BaseBdev4", 00:21:31.505 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:31.505 "is_configured": true, 00:21:31.505 "data_offset": 0, 00:21:31.505 "data_size": 65536 00:21:31.505 } 00:21:31.505 ] 00:21:31.505 }' 00:21:31.505 23:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.505 23:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.073 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:32.073 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.332 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:32.332 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:32.591 [2024-07-13 23:07:21.919082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.591 23:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.850 23:07:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.850 "name": "Existed_Raid", 00:21:32.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.850 "strip_size_kb": 64, 00:21:32.850 "state": "configuring", 00:21:32.850 "raid_level": "raid0", 00:21:32.850 "superblock": false, 00:21:32.850 "num_base_bdevs": 4, 00:21:32.850 "num_base_bdevs_discovered": 3, 00:21:32.850 "num_base_bdevs_operational": 4, 00:21:32.850 "base_bdevs_list": [ 00:21:32.850 { 00:21:32.850 "name": null, 00:21:32.850 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:32.850 "is_configured": false, 00:21:32.850 "data_offset": 0, 00:21:32.850 "data_size": 65536 00:21:32.850 }, 00:21:32.850 { 00:21:32.850 "name": "BaseBdev2", 00:21:32.850 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3", 00:21:32.850 "is_configured": true, 00:21:32.850 "data_offset": 0, 00:21:32.851 "data_size": 65536 00:21:32.851 }, 00:21:32.851 { 00:21:32.851 "name": "BaseBdev3", 00:21:32.851 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d", 00:21:32.851 "is_configured": true, 00:21:32.851 "data_offset": 0, 00:21:32.851 "data_size": 65536 00:21:32.851 }, 00:21:32.851 { 00:21:32.851 "name": "BaseBdev4", 00:21:32.851 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a", 00:21:32.851 "is_configured": true, 00:21:32.851 "data_offset": 0, 00:21:32.851 "data_size": 65536 00:21:32.851 } 00:21:32.851 ] 00:21:32.851 }' 00:21:32.851 23:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.851 23:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.788 23:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.788 23:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:33.788 23:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:33.788 23:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.788 23:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:34.045 23:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 94b9a35f-9942-4e72-b32d-7cef8588d814 00:21:34.303 [2024-07-13 23:07:23.606600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:34.303 [2024-07-13 23:07:23.607006] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:21:34.303 [2024-07-13 23:07:23.607056] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:34.303 [2024-07-13 23:07:23.607266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:34.304 [2024-07-13 23:07:23.607761] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:21:34.304 [2024-07-13 23:07:23.607928] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:21:34.304 [2024-07-13 23:07:23.608243] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.304 NewBaseBdev 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:34.304 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.561 23:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:34.820 [ 00:21:34.820 { 00:21:34.820 "name": "NewBaseBdev", 00:21:34.820 "aliases": [ 00:21:34.820 "94b9a35f-9942-4e72-b32d-7cef8588d814" 00:21:34.820 ], 00:21:34.820 "product_name": "Malloc disk", 00:21:34.820 "block_size": 512, 00:21:34.820 "num_blocks": 65536, 00:21:34.820 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814", 00:21:34.820 "assigned_rate_limits": { 00:21:34.820 "rw_ios_per_sec": 0, 00:21:34.820 "rw_mbytes_per_sec": 0, 00:21:34.820 "r_mbytes_per_sec": 0, 00:21:34.820 "w_mbytes_per_sec": 0 00:21:34.820 }, 00:21:34.820 "claimed": true, 00:21:34.820 "claim_type": "exclusive_write", 00:21:34.820 "zoned": false, 00:21:34.820 "supported_io_types": { 00:21:34.820 "read": true, 00:21:34.820 "write": true, 00:21:34.820 "unmap": true, 00:21:34.820 "flush": true, 00:21:34.820 "reset": true, 00:21:34.820 "nvme_admin": false, 00:21:34.820 "nvme_io": false, 00:21:34.820 "nvme_io_md": false, 00:21:34.820 "write_zeroes": true, 00:21:34.820 "zcopy": true, 00:21:34.820 "get_zone_info": false, 00:21:34.820 "zone_management": false, 00:21:34.820 "zone_append": false, 00:21:34.820 "compare": false, 00:21:34.820 "compare_and_write": false, 00:21:34.820 "abort": true, 00:21:34.820 "seek_hole": false, 00:21:34.820 "seek_data": false, 00:21:34.820 "copy": true, 00:21:34.820 "nvme_iov_md": false 00:21:34.820 }, 00:21:34.820 "memory_domains": [ 00:21:34.820 { 00:21:34.820 "dma_device_id": "system", 00:21:34.820 "dma_device_type": 1 00:21:34.820 }, 00:21:34.820 { 00:21:34.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.820 "dma_device_type": 2 00:21:34.820 } 00:21:34.820 ], 00:21:34.820 "driver_specific": {} 00:21:34.820 } 00:21:34.820 ] 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:34.820 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:35.078 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:21:35.078 "name": "Existed_Raid",
00:21:35.078 "uuid": "2058ed69-c72b-4024-9701-689b280a5b5c",
00:21:35.078 "strip_size_kb": 64,
00:21:35.078 "state": "online",
00:21:35.078 "raid_level": "raid0",
00:21:35.078 "superblock": false,
00:21:35.078 "num_base_bdevs": 4,
00:21:35.078 "num_base_bdevs_discovered": 4,
00:21:35.078 "num_base_bdevs_operational": 4,
00:21:35.078 "base_bdevs_list": [
00:21:35.078 {
00:21:35.078 "name": "NewBaseBdev",
00:21:35.078 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814",
00:21:35.078 "is_configured": true,
00:21:35.078 "data_offset": 0,
00:21:35.078 "data_size": 65536
00:21:35.078 },
00:21:35.078 {
00:21:35.078 "name": "BaseBdev2",
00:21:35.078 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3",
00:21:35.078 "is_configured": true,
00:21:35.078 "data_offset": 0,
00:21:35.078 "data_size": 65536
00:21:35.078 },
00:21:35.078 {
00:21:35.078 "name": "BaseBdev3",
00:21:35.078 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d",
00:21:35.078 "is_configured": true,
00:21:35.078 "data_offset": 0,
00:21:35.078 "data_size": 65536
00:21:35.078 },
00:21:35.078 {
00:21:35.078 "name": "BaseBdev4",
00:21:35.078 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a",
00:21:35.078 "is_configured": true,
00:21:35.078 "data_offset": 0,
00:21:35.078 "data_size": 65536
00:21:35.078 }
00:21:35.078 ]
00:21:35.078 }'
00:21:35.078 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:21:35.078 23:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:21:35.645 23:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:21:35.904 [2024-07-13 23:07:25.187489] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:35.904 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:21:35.904 "name": "Existed_Raid",
00:21:35.904 "aliases": [
00:21:35.904 "2058ed69-c72b-4024-9701-689b280a5b5c"
00:21:35.904 ],
00:21:35.904 "product_name": "Raid Volume",
00:21:35.904 "block_size": 512,
00:21:35.904 "num_blocks": 262144,
00:21:35.904 "uuid": "2058ed69-c72b-4024-9701-689b280a5b5c",
00:21:35.904 "assigned_rate_limits": {
00:21:35.904 "rw_ios_per_sec": 0,
00:21:35.904 "rw_mbytes_per_sec": 0,
00:21:35.904 "r_mbytes_per_sec": 0,
00:21:35.904 "w_mbytes_per_sec": 0
00:21:35.904 },
00:21:35.904 "claimed": false,
00:21:35.904 "zoned": false,
00:21:35.904 "supported_io_types": {
00:21:35.904 "read": true,
00:21:35.904 "write": true,
00:21:35.904 "unmap": true,
00:21:35.904 "flush": true,
00:21:35.904 "reset": true,
00:21:35.904 "nvme_admin": false,
00:21:35.904 "nvme_io": false,
00:21:35.904 "nvme_io_md": false,
00:21:35.904 "write_zeroes": true,
00:21:35.904 "zcopy": false,
00:21:35.904 "get_zone_info": false,
00:21:35.904 "zone_management": false,
00:21:35.904 "zone_append": false,
00:21:35.904 "compare": false,
00:21:35.904 "compare_and_write": false,
00:21:35.904 "abort": false,
00:21:35.904 "seek_hole": false,
00:21:35.904 "seek_data": false,
00:21:35.904 "copy": false,
00:21:35.904 "nvme_iov_md": false
00:21:35.904 },
00:21:35.904 "memory_domains": [
00:21:35.904 {
00:21:35.904 "dma_device_id": "system",
00:21:35.904 "dma_device_type": 1
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:35.904 "dma_device_type": 2
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "system",
00:21:35.904 "dma_device_type": 1
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:35.904 "dma_device_type": 2
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "system",
00:21:35.904 "dma_device_type": 1
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:35.904 "dma_device_type": 2
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "system",
00:21:35.904 "dma_device_type": 1
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:35.904 "dma_device_type": 2
00:21:35.904 }
00:21:35.904 ],
00:21:35.904 "driver_specific": {
00:21:35.904 "raid": {
00:21:35.904 "uuid": "2058ed69-c72b-4024-9701-689b280a5b5c",
00:21:35.904 "strip_size_kb": 64,
00:21:35.904 "state": "online",
00:21:35.904 "raid_level": "raid0",
00:21:35.904 "superblock": false,
00:21:35.904 "num_base_bdevs": 4,
00:21:35.904 "num_base_bdevs_discovered": 4,
00:21:35.904 "num_base_bdevs_operational": 4,
00:21:35.904 "base_bdevs_list": [
00:21:35.904 {
00:21:35.904 "name": "NewBaseBdev",
00:21:35.904 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814",
00:21:35.904 "is_configured": true,
00:21:35.904 "data_offset": 0,
00:21:35.904 "data_size": 65536
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "name": "BaseBdev2",
00:21:35.904 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3",
00:21:35.904 "is_configured": true,
00:21:35.904 "data_offset": 0,
00:21:35.904 "data_size": 65536
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "name": "BaseBdev3",
00:21:35.904 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d",
00:21:35.904 "is_configured": true,
00:21:35.904 "data_offset": 0,
00:21:35.904 "data_size": 65536
00:21:35.904 },
00:21:35.904 {
00:21:35.904 "name": "BaseBdev4",
00:21:35.904 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a",
00:21:35.904 "is_configured": true,
00:21:35.904 "data_offset": 0,
00:21:35.904 "data_size": 65536
00:21:35.904 }
00:21:35.904 ]
00:21:35.904 }
00:21:35.904 }
00:21:35.904 }'
00:21:35.904 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:35.904 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev
00:21:35.904 BaseBdev2
00:21:35.904 BaseBdev3
00:21:35.904 BaseBdev4'
00:21:35.904 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:21:35.904 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev
00:21:35.904 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:21:36.163 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:21:36.163 "name": "NewBaseBdev",
00:21:36.163 "aliases": [
00:21:36.163 "94b9a35f-9942-4e72-b32d-7cef8588d814"
00:21:36.163 ],
00:21:36.163 "product_name": "Malloc disk",
00:21:36.163 "block_size": 512,
00:21:36.163 "num_blocks": 65536,
00:21:36.163 "uuid": "94b9a35f-9942-4e72-b32d-7cef8588d814",
00:21:36.163 "assigned_rate_limits": {
00:21:36.163 "rw_ios_per_sec": 0,
00:21:36.163 "rw_mbytes_per_sec": 0,
00:21:36.163 "r_mbytes_per_sec": 0,
00:21:36.163 "w_mbytes_per_sec": 0
00:21:36.163 },
00:21:36.163 "claimed": true,
00:21:36.163 "claim_type": "exclusive_write",
00:21:36.163 "zoned": false,
00:21:36.163 "supported_io_types": {
00:21:36.163 "read": true,
00:21:36.163 "write": true,
00:21:36.163 "unmap": true,
00:21:36.163 "flush": true,
00:21:36.163 "reset": true,
00:21:36.163 "nvme_admin": false,
00:21:36.163 "nvme_io": false,
00:21:36.163 "nvme_io_md": false,
00:21:36.163 "write_zeroes": true,
00:21:36.163 "zcopy": true,
00:21:36.163 "get_zone_info": false,
00:21:36.163 "zone_management": false,
00:21:36.163 "zone_append": false,
00:21:36.163 "compare": false,
00:21:36.163 "compare_and_write": false,
00:21:36.163 "abort": true,
00:21:36.163 "seek_hole": false,
00:21:36.163 "seek_data": false,
00:21:36.163 "copy": true,
00:21:36.163 "nvme_iov_md": false
00:21:36.163 },
00:21:36.163 "memory_domains": [
00:21:36.163 {
00:21:36.163 "dma_device_id": "system",
00:21:36.163 "dma_device_type": 1
00:21:36.163 },
00:21:36.163 {
00:21:36.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:36.163 "dma_device_type": 2
00:21:36.163 }
00:21:36.163 ],
00:21:36.163 "driver_specific": {}
00:21:36.163 }'
00:21:36.163 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:21:36.452 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:36.718 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:36.718 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:21:36.718 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:21:36.718 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:21:36.718 23:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:21:36.977 "name": "BaseBdev2",
00:21:36.977 "aliases": [
00:21:36.977 "faa0cd07-214e-4684-a4b7-65e909615fc3"
00:21:36.977 ],
00:21:36.977 "product_name": "Malloc disk",
00:21:36.977 "block_size": 512,
00:21:36.977 "num_blocks": 65536,
00:21:36.977 "uuid": "faa0cd07-214e-4684-a4b7-65e909615fc3",
00:21:36.977 "assigned_rate_limits": {
00:21:36.977 "rw_ios_per_sec": 0,
00:21:36.977 "rw_mbytes_per_sec": 0,
00:21:36.977 "r_mbytes_per_sec": 0,
00:21:36.977 "w_mbytes_per_sec": 0
00:21:36.977 },
00:21:36.977 "claimed": true,
00:21:36.977 "claim_type": "exclusive_write",
00:21:36.977 "zoned": false,
00:21:36.977 "supported_io_types": {
00:21:36.977 "read": true,
00:21:36.977 "write": true,
00:21:36.977 "unmap": true,
00:21:36.977 "flush": true,
00:21:36.977 "reset": true,
00:21:36.977 "nvme_admin": false,
00:21:36.977 "nvme_io": false,
00:21:36.977 "nvme_io_md": false,
00:21:36.977 "write_zeroes": true,
00:21:36.977 "zcopy": true,
00:21:36.977 "get_zone_info": false,
00:21:36.977 "zone_management": false,
00:21:36.977 "zone_append": false,
00:21:36.977 "compare": false,
00:21:36.977 "compare_and_write": false,
00:21:36.977 "abort": true,
00:21:36.977 "seek_hole": false,
00:21:36.977 "seek_data": false,
00:21:36.977 "copy": true,
00:21:36.977 "nvme_iov_md": false
00:21:36.977 },
00:21:36.977 "memory_domains": [
00:21:36.977 {
00:21:36.977 "dma_device_id": "system",
00:21:36.977 "dma_device_type": 1
00:21:36.977 },
00:21:36.977 {
00:21:36.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:36.977 "dma_device_type": 2
00:21:36.977 }
00:21:36.977 ],
00:21:36.977 "driver_specific": {}
00:21:36.977 }'
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:21:36.977 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:21:37.237 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:21:37.496 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:21:37.496 "name": "BaseBdev3",
00:21:37.496 "aliases": [
00:21:37.496 "9ac9c19c-ab8f-4397-97be-a02164318a4d"
00:21:37.496 ],
00:21:37.496 "product_name": "Malloc disk",
00:21:37.496 "block_size": 512,
00:21:37.496 "num_blocks": 65536,
00:21:37.496 "uuid": "9ac9c19c-ab8f-4397-97be-a02164318a4d",
00:21:37.496 "assigned_rate_limits": {
00:21:37.496 "rw_ios_per_sec": 0,
00:21:37.496 "rw_mbytes_per_sec": 0,
00:21:37.496 "r_mbytes_per_sec": 0,
00:21:37.496 "w_mbytes_per_sec": 0
00:21:37.496 },
00:21:37.496 "claimed": true,
00:21:37.496 "claim_type": "exclusive_write",
00:21:37.496 "zoned": false,
00:21:37.496 "supported_io_types": {
00:21:37.496 "read": true,
00:21:37.496 "write": true,
00:21:37.496 "unmap": true,
00:21:37.496 "flush": true,
00:21:37.496 "reset": true,
00:21:37.496 "nvme_admin": false,
00:21:37.496 "nvme_io": false,
00:21:37.496 "nvme_io_md": false,
00:21:37.496 "write_zeroes": true,
00:21:37.496 "zcopy": true,
00:21:37.496 "get_zone_info": false,
00:21:37.496 "zone_management": false,
00:21:37.496 "zone_append": false,
00:21:37.496 "compare": false,
00:21:37.496 "compare_and_write": false,
00:21:37.496 "abort": true,
00:21:37.496 "seek_hole": false,
00:21:37.496 "seek_data": false,
00:21:37.496 "copy": true,
00:21:37.496 "nvme_iov_md": false
00:21:37.496 },
00:21:37.496 "memory_domains": [
00:21:37.496 {
00:21:37.496 "dma_device_id": "system",
00:21:37.496 "dma_device_type": 1
00:21:37.496 },
00:21:37.496 {
00:21:37.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:37.496 "dma_device_type": 2
00:21:37.496 }
00:21:37.496 ],
00:21:37.496 "driver_specific": {}
00:21:37.496 }'
00:21:37.755 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:37.755 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:37.755 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:21:37.755 23:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:37.755 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:37.755 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:21:37.755 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:37.755 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4
00:21:38.013 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:21:38.271 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:21:38.271 "name": "BaseBdev4",
00:21:38.271 "aliases": [
00:21:38.271 "e4bda9d6-a9b6-459a-907b-b71b750c839a"
00:21:38.271 ],
00:21:38.271 "product_name": "Malloc disk",
00:21:38.271 "block_size": 512,
00:21:38.271 "num_blocks": 65536,
00:21:38.271 "uuid": "e4bda9d6-a9b6-459a-907b-b71b750c839a",
00:21:38.271 "assigned_rate_limits": {
00:21:38.271 "rw_ios_per_sec": 0,
00:21:38.271 "rw_mbytes_per_sec": 0,
00:21:38.271 "r_mbytes_per_sec": 0,
00:21:38.271 "w_mbytes_per_sec": 0
00:21:38.271 },
00:21:38.271 "claimed": true,
00:21:38.271 "claim_type": "exclusive_write",
00:21:38.271 "zoned": false,
00:21:38.271 "supported_io_types": {
00:21:38.271 "read": true,
00:21:38.271 "write": true,
00:21:38.271 "unmap": true,
00:21:38.271 "flush": true,
00:21:38.271 "reset": true,
00:21:38.271 "nvme_admin": false,
00:21:38.271 "nvme_io": false,
00:21:38.271 "nvme_io_md": false,
00:21:38.271 "write_zeroes": true,
00:21:38.271 "zcopy": true,
00:21:38.271 "get_zone_info": false,
00:21:38.271 "zone_management": false,
00:21:38.271 "zone_append": false,
00:21:38.271 "compare": false,
00:21:38.271 "compare_and_write": false,
00:21:38.271 "abort": true,
00:21:38.271 "seek_hole": false,
00:21:38.271 "seek_data": false,
00:21:38.271 "copy": true,
00:21:38.271 "nvme_iov_md": false
00:21:38.271 },
00:21:38.271 "memory_domains": [
00:21:38.271 {
00:21:38.271 "dma_device_id": "system",
00:21:38.271 "dma_device_type": 1
00:21:38.271 },
00:21:38.271 {
00:21:38.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:38.271 "dma_device_type": 2
00:21:38.271 }
00:21:38.271 ],
00:21:38.271 "driver_specific": {}
00:21:38.271 }'
00:21:38.271 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:38.271 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:21:38.271 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:21:38.271 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:38.530 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:21:38.530 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:21:38.530 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:38.530 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:21:38.530 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:21:38.530 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:38.789 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:21:38.789 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:21:38.789 23:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:39.047 [2024-07-13 23:07:28.231843] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:39.047 [2024-07-13 23:07:28.232042] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:39.047 [2024-07-13 23:07:28.232222] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:39.047 [2024-07-13 23:07:28.232393] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:39.047 [2024-07-13 23:07:28.232497] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline
00:21:39.047 23:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 144045
00:21:39.047 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 144045 ']'
00:21:39.047 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 144045
00:21:39.047 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname
00:21:39.047 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:39.047 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144045
00:21:39.048 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:39.048 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:39.048 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144045'
00:21:39.048 killing process with pid 144045
00:21:39.048 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 144045
00:21:39.048 [2024-07-13 23:07:28.274443] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:39.048 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 144045
00:21:39.048 [2024-07-13 23:07:28.310453] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:39.307 ************************************
00:21:39.307 END TEST raid_state_function_test
00:21:39.307 ************************************
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0
00:21:39.307 00:21:39.307 real 0m33.872s
00:21:39.307 user 1m4.503s
00:21:39.307 sys 0m4.051s
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:39.307 23:07:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0
00:21:39.307 23:07:28 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:21:39.307 23:07:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:21:39.307 23:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:39.307 23:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:39.307 ************************************
00:21:39.307 START TEST raid_state_function_test_sb
00:21:39.307 ************************************
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']'
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64'
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']'
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=145152
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:21:39.307 Process raid pid: 145152
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 145152'
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 145152 /var/tmp/spdk-raid.sock
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 145152 ']'
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:21:39.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:39.307 23:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:39.307 [2024-07-13 23:07:28.660836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:21:39.307 [2024-07-13 23:07:28.661280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:39.565 [2024-07-13 23:07:28.805105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:39.565 [2024-07-13 23:07:28.897446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:39.824 [2024-07-13 23:07:28.975117] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:40.390 23:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:40.390 23:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0
00:21:40.390 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:21:40.648 [2024-07-13 23:07:29.805891] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:40.648 [2024-07-13 23:07:29.806215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:40.648 [2024-07-13 23:07:29.806320] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:40.648 [2024-07-13 23:07:29.806382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:40.648 [2024-07-13 23:07:29.806488] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:40.648 [2024-07-13 23:07:29.806585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:40.648 [2024-07-13 23:07:29.806840] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:21:40.648 [2024-07-13 23:07:29.806952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:40.648 23:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:40.906 23:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:21:40.906 "name": "Existed_Raid",
00:21:40.906 "uuid": "c472a247-3575-4989-afe1-83e8889a8cf8",
00:21:40.906 "strip_size_kb": 64,
00:21:40.906 "state": "configuring",
00:21:40.906 "raid_level": "raid0",
00:21:40.906 "superblock": true,
00:21:40.906 "num_base_bdevs": 4,
00:21:40.906 "num_base_bdevs_discovered": 0,
00:21:40.906 "num_base_bdevs_operational": 4,
00:21:40.906 "base_bdevs_list": [
00:21:40.906 {
00:21:40.906 "name": "BaseBdev1",
00:21:40.906 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:40.906 "is_configured": false,
00:21:40.906 "data_offset": 0,
00:21:40.906 "data_size": 0
00:21:40.906 },
00:21:40.906 {
00:21:40.906 "name": "BaseBdev2",
00:21:40.906 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:40.906 "is_configured": false,
00:21:40.906 "data_offset": 0,
00:21:40.906 "data_size": 0
00:21:40.906 },
00:21:40.906 {
00:21:40.906 "name": "BaseBdev3",
00:21:40.906 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:40.906 "is_configured": false,
00:21:40.906 "data_offset": 0,
00:21:40.906 "data_size": 0
00:21:40.906 },
00:21:40.906 {
00:21:40.906 "name": "BaseBdev4",
00:21:40.907 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:40.907 "is_configured": false,
00:21:40.907 "data_offset": 0,
00:21:40.907 "data_size": 0
00:21:40.907 }
00:21:40.907 ]
00:21:40.907 }'
00:21:40.907 23:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:21:40.907 23:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:41.472 23:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:41.731 [2024-07-13 23:07:30.941979] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:41.731 [2024-07-13 23:07:30.942227] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:21:41.731 23:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:21:41.989 [2024-07-13 23:07:31.210019] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:41.989 [2024-07-13 23:07:31.210257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:41.989 [2024-07-13 23:07:31.210383] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:41.989 [2024-07-13 23:07:31.210455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:41.989 [2024-07-13 23:07:31.210680] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:41.989 [2024-07-13 23:07:31.210816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:41.989 [2024-07-13 23:07:31.210948] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:21:41.989 [2024-07-13 23:07:31.211021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:21:41.990 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:21:42.248 [2024-07-13 23:07:31.484078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:42.248 BaseBdev1
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:42.248 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:42.506 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:21:42.764 [
00:21:42.764 {
00:21:42.764 "name": "BaseBdev1",
00:21:42.764 "aliases": [
00:21:42.764 "0ea41334-dd23-4465-8409-43e520886553"
00:21:42.764 ],
00:21:42.764 "product_name": "Malloc disk",
00:21:42.764 "block_size": 512,
00:21:42.764 "num_blocks": 65536,
00:21:42.764 "uuid": "0ea41334-dd23-4465-8409-43e520886553",
00:21:42.764 "assigned_rate_limits": {
00:21:42.764 "rw_ios_per_sec": 0,
00:21:42.764 "rw_mbytes_per_sec": 0,
00:21:42.764 "r_mbytes_per_sec": 0,
00:21:42.764 "w_mbytes_per_sec": 0
00:21:42.764 },
00:21:42.764 "claimed": true,
00:21:42.764 "claim_type": "exclusive_write",
00:21:42.764 "zoned": false,
00:21:42.764 "supported_io_types": {
00:21:42.764 "read": true,
00:21:42.764 "write": true,
00:21:42.764 "unmap": true,
00:21:42.764 "flush": true,
00:21:42.764 "reset": true,
00:21:42.764 "nvme_admin": false,
00:21:42.764 "nvme_io": false,
00:21:42.764 "nvme_io_md": false,
00:21:42.764 "write_zeroes": true,
00:21:42.764 "zcopy": true,
00:21:42.764 "get_zone_info": false,
00:21:42.764 "zone_management": false,
00:21:42.764 "zone_append": false,
00:21:42.764 "compare": false,
00:21:42.764 "compare_and_write": false,
00:21:42.764 "abort": true,
00:21:42.764 "seek_hole": false,
00:21:42.764 "seek_data": false,
00:21:42.764 "copy": true,
00:21:42.764 "nvme_iov_md": false
00:21:42.764 },
00:21:42.764 "memory_domains": [
00:21:42.764 {
00:21:42.764 "dma_device_id": "system",
00:21:42.764 "dma_device_type": 1
00:21:42.764 },
00:21:42.764 {
00:21:42.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:42.764 "dma_device_type": 2
00:21:42.764 }
00:21:42.764 ],
00:21:42.764 "driver_specific": {}
00:21:42.764 }
00:21:42.764 ]
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:42.764 23:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:43.023 23:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:21:43.023 "name": "Existed_Raid",
00:21:43.023 "uuid": "e6a1717a-826f-4e1a-a11f-1b0f5133b54f",
00:21:43.023 "strip_size_kb": 64,
00:21:43.023 "state": "configuring",
00:21:43.023 "raid_level": "raid0",
00:21:43.023 "superblock": true,
00:21:43.023 "num_base_bdevs": 4,
00:21:43.023 "num_base_bdevs_discovered": 1,
00:21:43.023 "num_base_bdevs_operational": 4,
00:21:43.023 "base_bdevs_list": [
00:21:43.023 {
00:21:43.023 "name": "BaseBdev1",
00:21:43.023 "uuid": "0ea41334-dd23-4465-8409-43e520886553",
00:21:43.023 "is_configured": true,
00:21:43.023 "data_offset": 2048,
00:21:43.023 "data_size": 63488
00:21:43.023 },
00:21:43.023 {
00:21:43.023 "name": "BaseBdev2",
00:21:43.023 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:43.023 "is_configured": false,
00:21:43.023 "data_offset": 0,
00:21:43.023 "data_size": 0
00:21:43.023 },
00:21:43.023 {
00:21:43.023 "name": "BaseBdev3",
00:21:43.023 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:43.023 "is_configured": false,
00:21:43.023 "data_offset": 0,
00:21:43.023 "data_size": 0
00:21:43.023 },
00:21:43.023 {
00:21:43.023 "name": "BaseBdev4",
00:21:43.023 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:43.023 "is_configured": false,
00:21:43.023 "data_offset": 0,
00:21:43.023 "data_size": 0
00:21:43.023 }
00:21:43.023 ]
00:21:43.023 }'
00:21:43.023 23:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:21:43.023 23:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:43.589 23:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:43.848 [2024-07-13 23:07:33.108517] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:43.848 [2024-07-13 23:07:33.108947] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:21:43.848 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:21:44.105 [2024-07-13 23:07:33.324562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:44.105 [2024-07-13 23:07:33.326974] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:44.105 [2024-07-13 23:07:33.327199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:44.105 [2024-07-13 23:07:33.327340] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:44.105 [2024-07-13 23:07:33.327413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:44.105 [2024-07-13 23:07:33.327669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:21:44.105 [2024-07-13 23:07:33.327733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:44.105 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:44.362 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:21:44.362 "name": "Existed_Raid",
00:21:44.362 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c",
00:21:44.362 "strip_size_kb": 64,
00:21:44.362 "state": "configuring",
00:21:44.362 "raid_level": "raid0",
00:21:44.362 "superblock": true,
00:21:44.362 "num_base_bdevs": 4,
00:21:44.362 "num_base_bdevs_discovered": 1,
00:21:44.362 "num_base_bdevs_operational": 4,
00:21:44.362 "base_bdevs_list": [
00:21:44.362 {
00:21:44.362 "name": "BaseBdev1",
00:21:44.362 "uuid": "0ea41334-dd23-4465-8409-43e520886553",
00:21:44.362 "is_configured": true,
00:21:44.362 "data_offset": 2048,
00:21:44.362 "data_size": 63488
00:21:44.362 },
00:21:44.362 {
00:21:44.362 "name": "BaseBdev2",
00:21:44.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:44.362 "is_configured": false,
00:21:44.362 "data_offset": 0,
00:21:44.362 "data_size": 0
00:21:44.362 },
00:21:44.362 {
00:21:44.362 "name": "BaseBdev3",
00:21:44.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:44.362 "is_configured": false,
00:21:44.362 "data_offset": 0,
00:21:44.362 "data_size": 0
00:21:44.362 },
00:21:44.362 {
00:21:44.362 "name": "BaseBdev4",
00:21:44.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:44.362 "is_configured": false,
00:21:44.362 "data_offset": 0,
00:21:44.362 "data_size": 0
00:21:44.362 }
00:21:44.362 ]
00:21:44.362 }'
00:21:44.362 23:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:21:44.362 23:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:44.928 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:21:45.186 [2024-07-13 23:07:34.478408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:45.186 BaseBdev2
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:45.186 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:45.445 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:21:45.704 [
00:21:45.704 {
00:21:45.704 "name": "BaseBdev2",
00:21:45.704 "aliases": [
00:21:45.704 "447f8e52-53a3-4f30-bca9-e21ea9ec6a45"
00:21:45.704 ],
00:21:45.704 "product_name": "Malloc disk",
00:21:45.704 "block_size": 512,
00:21:45.704 "num_blocks": 65536,
00:21:45.704 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45",
00:21:45.704 "assigned_rate_limits": {
00:21:45.704 "rw_ios_per_sec": 0,
00:21:45.704 "rw_mbytes_per_sec": 0,
00:21:45.704 "r_mbytes_per_sec": 0,
00:21:45.704 "w_mbytes_per_sec": 0
00:21:45.704 },
00:21:45.704 "claimed": true,
00:21:45.704 "claim_type": "exclusive_write",
00:21:45.704 "zoned": false,
00:21:45.704 "supported_io_types": {
00:21:45.704 "read": true,
00:21:45.704 "write": true,
00:21:45.704 "unmap": true,
00:21:45.704 "flush": true,
00:21:45.704 "reset": true,
00:21:45.704 "nvme_admin": false,
00:21:45.704 "nvme_io": false,
00:21:45.704 "nvme_io_md": false,
00:21:45.704 "write_zeroes": true,
00:21:45.704 "zcopy": true,
00:21:45.704 "get_zone_info": false,
00:21:45.704 "zone_management": false,
00:21:45.704 "zone_append": false,
00:21:45.704 "compare": false,
00:21:45.704 "compare_and_write": false,
00:21:45.704 "abort": true,
00:21:45.704 "seek_hole": false,
00:21:45.704 "seek_data": false,
00:21:45.704 "copy": true,
00:21:45.704 "nvme_iov_md": false
00:21:45.704 },
00:21:45.704 "memory_domains": [
00:21:45.704 {
00:21:45.704 "dma_device_id": "system",
00:21:45.704 "dma_device_type": 1
00:21:45.704 },
00:21:45.704 {
00:21:45.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:45.704 "dma_device_type": 2
00:21:45.704 }
00:21:45.704 ],
00:21:45.704 "driver_specific": {}
00:21:45.704 }
00:21:45.704 ]
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:45.704 23:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:45.963 23:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:21:45.963 "name": "Existed_Raid",
00:21:45.963 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c",
00:21:45.963 "strip_size_kb": 64,
00:21:45.963 "state": "configuring",
00:21:45.963 "raid_level": "raid0",
00:21:45.963 "superblock": true,
00:21:45.963 "num_base_bdevs": 4,
00:21:45.963 "num_base_bdevs_discovered": 2,
00:21:45.963 "num_base_bdevs_operational": 4,
00:21:45.963 "base_bdevs_list": [
00:21:45.963 {
00:21:45.963 "name": "BaseBdev1",
00:21:45.963 "uuid": "0ea41334-dd23-4465-8409-43e520886553",
00:21:45.963 "is_configured": true,
00:21:45.963 "data_offset": 2048,
00:21:45.963 "data_size": 63488
00:21:45.963 },
00:21:45.963 {
00:21:45.963 "name": "BaseBdev2",
00:21:45.963 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45",
00:21:45.963 "is_configured": true,
00:21:45.963 "data_offset": 2048,
00:21:45.963 "data_size": 63488
00:21:45.963 },
00:21:45.963 {
00:21:45.963 "name": "BaseBdev3",
00:21:45.963 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:45.963 "is_configured": false,
00:21:45.963 "data_offset": 0,
00:21:45.963 "data_size": 0
00:21:45.963 },
00:21:45.963 {
00:21:45.963 "name": "BaseBdev4",
00:21:45.963 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:45.963 "is_configured": false,
00:21:45.963 "data_offset": 0,
00:21:45.963 "data_size": 0
00:21:45.963 }
00:21:45.963 ]
00:21:45.963 }'
00:21:45.963 23:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:21:45.963 23:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:46.529 23:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:21:46.788 [2024-07-13 23:07:36.030549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:46.788 BaseBdev3
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:46.788 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:47.047 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:21:47.306 [
00:21:47.306 {
00:21:47.306 "name": "BaseBdev3",
00:21:47.306 "aliases": [
00:21:47.306 "b3a40721-91ba-42eb-8ee7-d69ca45a5248"
00:21:47.306 ],
00:21:47.306 "product_name": "Malloc disk",
00:21:47.306 "block_size": 512,
00:21:47.306 "num_blocks": 65536,
00:21:47.306 "uuid": "b3a40721-91ba-42eb-8ee7-d69ca45a5248",
00:21:47.306 "assigned_rate_limits": {
00:21:47.306 "rw_ios_per_sec": 0,
00:21:47.306 "rw_mbytes_per_sec": 0,
00:21:47.306 "r_mbytes_per_sec": 0,
00:21:47.306 "w_mbytes_per_sec": 0
00:21:47.306 },
00:21:47.306 "claimed": true,
00:21:47.306 "claim_type": "exclusive_write",
00:21:47.306 "zoned": false,
00:21:47.306 "supported_io_types": {
00:21:47.306 "read": true,
00:21:47.306 "write": true,
00:21:47.306 "unmap": true,
00:21:47.306 "flush": true,
00:21:47.306 "reset": true,
00:21:47.306 "nvme_admin": false,
00:21:47.306 "nvme_io": false,
00:21:47.306 "nvme_io_md": false,
"write_zeroes": true, 00:21:47.306 "zcopy": true, 00:21:47.306 "get_zone_info": false, 00:21:47.306 "zone_management": false, 00:21:47.306 "zone_append": false, 00:21:47.306 "compare": false, 00:21:47.306 "compare_and_write": false, 00:21:47.306 "abort": true, 00:21:47.306 "seek_hole": false, 00:21:47.306 "seek_data": false, 00:21:47.306 "copy": true, 00:21:47.306 "nvme_iov_md": false 00:21:47.306 }, 00:21:47.306 "memory_domains": [ 00:21:47.306 { 00:21:47.306 "dma_device_id": "system", 00:21:47.306 "dma_device_type": 1 00:21:47.306 }, 00:21:47.306 { 00:21:47.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.306 "dma_device_type": 2 00:21:47.306 } 00:21:47.306 ], 00:21:47.306 "driver_specific": {} 00:21:47.306 } 00:21:47.306 ] 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.306 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.565 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.565 "name": "Existed_Raid", 00:21:47.565 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c", 00:21:47.565 "strip_size_kb": 64, 00:21:47.565 "state": "configuring", 00:21:47.565 "raid_level": "raid0", 00:21:47.565 "superblock": true, 00:21:47.565 "num_base_bdevs": 4, 00:21:47.565 "num_base_bdevs_discovered": 3, 00:21:47.565 "num_base_bdevs_operational": 4, 00:21:47.565 "base_bdevs_list": [ 00:21:47.565 { 00:21:47.565 "name": "BaseBdev1", 00:21:47.565 "uuid": "0ea41334-dd23-4465-8409-43e520886553", 00:21:47.565 "is_configured": true, 00:21:47.565 "data_offset": 2048, 00:21:47.565 "data_size": 63488 00:21:47.565 }, 00:21:47.565 { 00:21:47.565 "name": "BaseBdev2", 00:21:47.565 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45", 00:21:47.565 "is_configured": true, 00:21:47.565 "data_offset": 2048, 00:21:47.565 "data_size": 63488 00:21:47.565 }, 00:21:47.565 { 
00:21:47.565 "name": "BaseBdev3", 00:21:47.565 "uuid": "b3a40721-91ba-42eb-8ee7-d69ca45a5248", 00:21:47.565 "is_configured": true, 00:21:47.565 "data_offset": 2048, 00:21:47.565 "data_size": 63488 00:21:47.565 }, 00:21:47.565 { 00:21:47.565 "name": "BaseBdev4", 00:21:47.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.565 "is_configured": false, 00:21:47.565 "data_offset": 0, 00:21:47.565 "data_size": 0 00:21:47.565 } 00:21:47.565 ] 00:21:47.565 }' 00:21:47.565 23:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.565 23:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.132 23:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:48.391 [2024-07-13 23:07:37.646446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:48.391 [2024-07-13 23:07:37.647047] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:21:48.391 [2024-07-13 23:07:37.647219] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:48.391 [2024-07-13 23:07:37.647409] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:21:48.391 BaseBdev4 00:21:48.391 [2024-07-13 23:07:37.647909] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:21:48.391 [2024-07-13 23:07:37.647924] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:21:48.391 [2024-07-13 23:07:37.648077] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:48.391 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.650 23:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:48.909 [ 00:21:48.909 { 00:21:48.909 "name": "BaseBdev4", 00:21:48.909 "aliases": [ 00:21:48.909 "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db" 00:21:48.909 ], 00:21:48.909 "product_name": "Malloc disk", 00:21:48.909 "block_size": 512, 00:21:48.909 "num_blocks": 65536, 00:21:48.909 "uuid": "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db", 00:21:48.909 "assigned_rate_limits": { 00:21:48.909 "rw_ios_per_sec": 0, 00:21:48.909 "rw_mbytes_per_sec": 0, 00:21:48.909 "r_mbytes_per_sec": 0, 00:21:48.909 "w_mbytes_per_sec": 0 00:21:48.909 }, 00:21:48.909 "claimed": true, 00:21:48.909 "claim_type": "exclusive_write", 00:21:48.909 "zoned": false, 00:21:48.909 "supported_io_types": { 
00:21:48.909 "read": true, 00:21:48.909 "write": true, 00:21:48.909 "unmap": true, 00:21:48.909 "flush": true, 00:21:48.909 "reset": true, 00:21:48.909 "nvme_admin": false, 00:21:48.909 "nvme_io": false, 00:21:48.909 "nvme_io_md": false, 00:21:48.909 "write_zeroes": true, 00:21:48.909 "zcopy": true, 00:21:48.909 "get_zone_info": false, 00:21:48.909 "zone_management": false, 00:21:48.909 "zone_append": false, 00:21:48.909 "compare": false, 00:21:48.909 "compare_and_write": false, 00:21:48.909 "abort": true, 00:21:48.909 "seek_hole": false, 00:21:48.909 "seek_data": false, 00:21:48.909 "copy": true, 00:21:48.909 "nvme_iov_md": false 00:21:48.909 }, 00:21:48.909 "memory_domains": [ 00:21:48.909 { 00:21:48.909 "dma_device_id": "system", 00:21:48.909 "dma_device_type": 1 00:21:48.909 }, 00:21:48.909 { 00:21:48.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.909 "dma_device_type": 2 00:21:48.909 } 00:21:48.909 ], 00:21:48.909 "driver_specific": {} 00:21:48.909 } 00:21:48.909 ] 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.909 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.168 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.168 "name": "Existed_Raid", 00:21:49.168 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c", 00:21:49.168 "strip_size_kb": 64, 00:21:49.168 "state": "online", 00:21:49.168 "raid_level": "raid0", 00:21:49.168 "superblock": true, 00:21:49.168 "num_base_bdevs": 4, 00:21:49.168 "num_base_bdevs_discovered": 4, 00:21:49.168 "num_base_bdevs_operational": 4, 00:21:49.168 "base_bdevs_list": [ 00:21:49.168 { 00:21:49.168 "name": "BaseBdev1", 00:21:49.168 "uuid": "0ea41334-dd23-4465-8409-43e520886553", 00:21:49.168 "is_configured": true, 00:21:49.168 "data_offset": 2048, 00:21:49.168 "data_size": 63488 00:21:49.168 }, 00:21:49.168 
{ 00:21:49.168 "name": "BaseBdev2", 00:21:49.168 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45", 00:21:49.168 "is_configured": true, 00:21:49.168 "data_offset": 2048, 00:21:49.168 "data_size": 63488 00:21:49.168 }, 00:21:49.168 { 00:21:49.168 "name": "BaseBdev3", 00:21:49.168 "uuid": "b3a40721-91ba-42eb-8ee7-d69ca45a5248", 00:21:49.168 "is_configured": true, 00:21:49.168 "data_offset": 2048, 00:21:49.168 "data_size": 63488 00:21:49.168 }, 00:21:49.168 { 00:21:49.168 "name": "BaseBdev4", 00:21:49.168 "uuid": "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db", 00:21:49.168 "is_configured": true, 00:21:49.168 "data_offset": 2048, 00:21:49.168 "data_size": 63488 00:21:49.168 } 00:21:49.168 ] 00:21:49.168 }' 00:21:49.168 23:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.168 23:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:49.735 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:49.993 [2024-07-13 23:07:39.319353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.993 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:49.993 "name": "Existed_Raid", 00:21:49.993 "aliases": [ 00:21:49.993 "7403f330-a547-4792-b3a4-fc5bdf29480c" 00:21:49.993 ], 00:21:49.993 "product_name": "Raid Volume", 00:21:49.993 "block_size": 512, 00:21:49.994 "num_blocks": 253952, 00:21:49.994 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c", 00:21:49.994 "assigned_rate_limits": { 00:21:49.994 "rw_ios_per_sec": 0, 00:21:49.994 "rw_mbytes_per_sec": 0, 00:21:49.994 "r_mbytes_per_sec": 0, 00:21:49.994 "w_mbytes_per_sec": 0 00:21:49.994 }, 00:21:49.994 "claimed": false, 00:21:49.994 "zoned": false, 00:21:49.994 "supported_io_types": { 00:21:49.994 "read": true, 00:21:49.994 "write": true, 00:21:49.994 "unmap": true, 00:21:49.994 "flush": true, 00:21:49.994 "reset": true, 00:21:49.994 "nvme_admin": false, 00:21:49.994 "nvme_io": false, 00:21:49.994 "nvme_io_md": false, 00:21:49.994 "write_zeroes": true, 00:21:49.994 "zcopy": false, 00:21:49.994 "get_zone_info": false, 00:21:49.994 "zone_management": false, 00:21:49.994 "zone_append": false, 00:21:49.994 "compare": false, 00:21:49.994 "compare_and_write": false, 00:21:49.994 "abort": false, 00:21:49.994 "seek_hole": false, 00:21:49.994 "seek_data": false, 00:21:49.994 "copy": false, 00:21:49.994 "nvme_iov_md": false 00:21:49.994 }, 00:21:49.994 "memory_domains": [ 00:21:49.994 { 00:21:49.994 "dma_device_id": "system", 00:21:49.994 "dma_device_type": 1 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.994 "dma_device_type": 2 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "dma_device_id": "system", 00:21:49.994 "dma_device_type": 1 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.994 "dma_device_type": 2 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "dma_device_id": "system", 00:21:49.994 "dma_device_type": 1 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.994 "dma_device_type": 2 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "dma_device_id": "system", 00:21:49.994 "dma_device_type": 1 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.994 "dma_device_type": 2 00:21:49.994 } 00:21:49.994 ], 00:21:49.994 "driver_specific": { 00:21:49.994 "raid": { 00:21:49.994 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c", 00:21:49.994 "strip_size_kb": 64, 00:21:49.994 "state": "online", 00:21:49.994 "raid_level": "raid0", 00:21:49.994 "superblock": true, 00:21:49.994 "num_base_bdevs": 4, 00:21:49.994 "num_base_bdevs_discovered": 4, 00:21:49.994 "num_base_bdevs_operational": 4, 00:21:49.994 "base_bdevs_list": [ 00:21:49.994 { 00:21:49.994 "name": "BaseBdev1", 00:21:49.994 "uuid": "0ea41334-dd23-4465-8409-43e520886553", 00:21:49.994 "is_configured": true, 00:21:49.994 "data_offset": 2048, 00:21:49.994 "data_size": 63488 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "name": "BaseBdev2", 00:21:49.994 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45", 00:21:49.994 "is_configured": true, 00:21:49.994 "data_offset": 2048, 00:21:49.994 "data_size": 63488 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "name": "BaseBdev3", 00:21:49.994 "uuid": "b3a40721-91ba-42eb-8ee7-d69ca45a5248", 00:21:49.994 "is_configured": true, 00:21:49.994 "data_offset": 2048, 00:21:49.994 "data_size": 63488 00:21:49.994 }, 00:21:49.994 { 00:21:49.994 "name": "BaseBdev4", 00:21:49.994 "uuid": "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db", 00:21:49.994 "is_configured": true, 00:21:49.994 "data_offset": 2048, 00:21:49.994 "data_size": 63488 00:21:49.994 } 00:21:49.994 ] 00:21:49.994 } 00:21:49.994 } 00:21:49.994 }' 00:21:49.994 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:49.994 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:49.994 BaseBdev2 00:21:49.994 BaseBdev3 00:21:49.994 BaseBdev4' 00:21:49.994 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.994 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:49.994 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:50.266 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:50.266 "name": "BaseBdev1", 00:21:50.266 "aliases": [ 00:21:50.266 "0ea41334-dd23-4465-8409-43e520886553" 00:21:50.266 ], 00:21:50.266 "product_name": "Malloc disk", 00:21:50.266 "block_size": 512, 00:21:50.266 "num_blocks": 65536, 00:21:50.267 "uuid": "0ea41334-dd23-4465-8409-43e520886553", 00:21:50.267 "assigned_rate_limits": { 00:21:50.267 "rw_ios_per_sec": 0, 00:21:50.267 "rw_mbytes_per_sec": 0, 00:21:50.267 "r_mbytes_per_sec": 0, 00:21:50.267 "w_mbytes_per_sec": 0 00:21:50.267 }, 00:21:50.267 
"claimed": true, 00:21:50.267 "claim_type": "exclusive_write", 00:21:50.267 "zoned": false, 00:21:50.267 "supported_io_types": { 00:21:50.267 "read": true, 00:21:50.267 "write": true, 00:21:50.267 "unmap": true, 00:21:50.267 "flush": true, 00:21:50.267 "reset": true, 00:21:50.267 "nvme_admin": false, 00:21:50.267 "nvme_io": false, 00:21:50.267 "nvme_io_md": false, 00:21:50.267 "write_zeroes": true, 00:21:50.267 "zcopy": true, 00:21:50.267 "get_zone_info": false, 00:21:50.267 "zone_management": false, 00:21:50.267 "zone_append": false, 00:21:50.267 "compare": false, 00:21:50.267 "compare_and_write": false, 00:21:50.267 "abort": true, 00:21:50.267 "seek_hole": false, 00:21:50.267 "seek_data": false, 00:21:50.267 "copy": true, 00:21:50.267 "nvme_iov_md": false 00:21:50.267 }, 00:21:50.267 "memory_domains": [ 00:21:50.267 { 00:21:50.267 "dma_device_id": "system", 00:21:50.267 "dma_device_type": 1 00:21:50.267 }, 00:21:50.267 { 00:21:50.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.267 "dma_device_type": 2 00:21:50.267 } 00:21:50.267 ], 00:21:50.267 "driver_specific": {} 00:21:50.267 }' 00:21:50.267 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.537 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.538 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:50.796 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.796 23:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.796 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:50.796 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:50.796 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:50.796 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.055 "name": "BaseBdev2", 00:21:51.055 "aliases": [ 00:21:51.055 "447f8e52-53a3-4f30-bca9-e21ea9ec6a45" 00:21:51.055 ], 00:21:51.055 "product_name": "Malloc disk", 00:21:51.055 "block_size": 512, 00:21:51.055 "num_blocks": 65536, 00:21:51.055 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45", 00:21:51.055 "assigned_rate_limits": { 00:21:51.055 "rw_ios_per_sec": 0, 00:21:51.055 "rw_mbytes_per_sec": 0, 00:21:51.055 "r_mbytes_per_sec": 0, 00:21:51.055 "w_mbytes_per_sec": 0 00:21:51.055 }, 00:21:51.055 "claimed": true, 00:21:51.055 "claim_type": "exclusive_write", 00:21:51.055 "zoned": false, 00:21:51.055 "supported_io_types": { 00:21:51.055 "read": 
true, 00:21:51.055 "write": true, 00:21:51.055 "unmap": true, 00:21:51.055 "flush": true, 00:21:51.055 "reset": true, 00:21:51.055 "nvme_admin": false, 00:21:51.055 "nvme_io": false, 00:21:51.055 "nvme_io_md": false, 00:21:51.055 "write_zeroes": true, 00:21:51.055 "zcopy": true, 00:21:51.055 "get_zone_info": false, 00:21:51.055 "zone_management": false, 00:21:51.055 "zone_append": false, 00:21:51.055 "compare": false, 00:21:51.055 "compare_and_write": false, 00:21:51.055 "abort": true, 00:21:51.055 "seek_hole": false, 00:21:51.055 "seek_data": false, 00:21:51.055 "copy": true, 00:21:51.055 "nvme_iov_md": false 00:21:51.055 }, 00:21:51.055 "memory_domains": [ 00:21:51.055 { 00:21:51.055 "dma_device_id": "system", 00:21:51.055 "dma_device_type": 1 00:21:51.055 }, 00:21:51.055 { 00:21:51.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.055 "dma_device_type": 2 00:21:51.055 } 00:21:51.055 ], 00:21:51.055 "driver_specific": {} 00:21:51.055 }' 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.055 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:51.314 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.572 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.572 "name": "BaseBdev3", 00:21:51.572 "aliases": [ 00:21:51.572 "b3a40721-91ba-42eb-8ee7-d69ca45a5248" 00:21:51.572 ], 00:21:51.572 "product_name": "Malloc disk", 00:21:51.572 "block_size": 512, 00:21:51.572 "num_blocks": 65536, 00:21:51.572 "uuid": "b3a40721-91ba-42eb-8ee7-d69ca45a5248", 00:21:51.572 "assigned_rate_limits": { 00:21:51.572 "rw_ios_per_sec": 0, 00:21:51.572 "rw_mbytes_per_sec": 0, 00:21:51.572 "r_mbytes_per_sec": 0, 00:21:51.572 "w_mbytes_per_sec": 0 00:21:51.572 }, 00:21:51.572 "claimed": true, 00:21:51.572 "claim_type": "exclusive_write", 00:21:51.572 "zoned": false, 00:21:51.572 "supported_io_types": { 00:21:51.572 "read": true, 00:21:51.572 "write": true, 00:21:51.572 "unmap": true, 00:21:51.572 "flush": true, 00:21:51.572 "reset": true, 00:21:51.572 "nvme_admin": false, 
00:21:51.572 "nvme_io": false, 00:21:51.572 "nvme_io_md": false, 00:21:51.572 "write_zeroes": true, 00:21:51.572 "zcopy": true, 00:21:51.572 "get_zone_info": false, 00:21:51.572 "zone_management": false, 00:21:51.572 "zone_append": false, 00:21:51.572 "compare": false, 00:21:51.572 "compare_and_write": false, 00:21:51.572 "abort": true, 00:21:51.572 "seek_hole": false, 00:21:51.572 "seek_data": false, 00:21:51.572 "copy": true, 00:21:51.572 "nvme_iov_md": false 00:21:51.572 }, 00:21:51.572 "memory_domains": [ 00:21:51.572 { 00:21:51.572 "dma_device_id": "system", 00:21:51.572 "dma_device_type": 1 00:21:51.572 }, 00:21:51.572 { 00:21:51.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.573 "dma_device_type": 2 00:21:51.573 } 00:21:51.573 ], 00:21:51.573 "driver_specific": {} 00:21:51.573 }' 00:21:51.573 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.831 23:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.831 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.089 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.089 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.089 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.089 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:52.089 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.347 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.347 "name": "BaseBdev4", 00:21:52.347 "aliases": [ 00:21:52.347 "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db" 00:21:52.347 ], 00:21:52.347 "product_name": "Malloc disk", 00:21:52.347 "block_size": 512, 00:21:52.347 "num_blocks": 65536, 00:21:52.347 "uuid": "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db", 00:21:52.347 "assigned_rate_limits": { 00:21:52.347 "rw_ios_per_sec": 0, 00:21:52.347 "rw_mbytes_per_sec": 0, 00:21:52.347 "r_mbytes_per_sec": 0, 00:21:52.347 "w_mbytes_per_sec": 0 00:21:52.347 }, 00:21:52.347 "claimed": true, 00:21:52.347 "claim_type": "exclusive_write", 00:21:52.347 "zoned": false, 00:21:52.347 "supported_io_types": { 00:21:52.347 "read": true, 00:21:52.347 "write": true, 00:21:52.347 "unmap": true, 00:21:52.347 "flush": true, 00:21:52.347 "reset": true, 00:21:52.347 "nvme_admin": false, 00:21:52.347 "nvme_io": false, 00:21:52.347 "nvme_io_md": false, 00:21:52.347 "write_zeroes": true, 00:21:52.347 "zcopy": true, 00:21:52.347 
"get_zone_info": false, 00:21:52.347 "zone_management": false, 00:21:52.347 "zone_append": false, 00:21:52.347 "compare": false, 00:21:52.347 "compare_and_write": false, 00:21:52.347 "abort": true, 00:21:52.347 "seek_hole": false, 00:21:52.347 "seek_data": false, 00:21:52.347 "copy": true, 00:21:52.347 "nvme_iov_md": false 00:21:52.347 }, 00:21:52.347 "memory_domains": [ 00:21:52.347 { 00:21:52.347 "dma_device_id": "system", 00:21:52.347 "dma_device_type": 1 00:21:52.347 }, 00:21:52.347 { 00:21:52.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.347 "dma_device_type": 2 00:21:52.347 } 00:21:52.347 ], 00:21:52.347 "driver_specific": {} 00:21:52.347 }' 00:21:52.347 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.347 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.347 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.347 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.347 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.606 23:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:52.866 [2024-07-13 23:07:42.214068] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.866 [2024-07-13 23:07:42.214275] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.866 [2024-07-13 23:07:42.214469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:52.866 23:07:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.866 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.125 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.125 "name": "Existed_Raid", 00:21:53.125 "uuid": "7403f330-a547-4792-b3a4-fc5bdf29480c", 00:21:53.125 "strip_size_kb": 64, 00:21:53.125 "state": "offline", 00:21:53.125 "raid_level": "raid0", 00:21:53.125 "superblock": true, 00:21:53.125 "num_base_bdevs": 4, 00:21:53.125 "num_base_bdevs_discovered": 3, 00:21:53.125 "num_base_bdevs_operational": 3, 00:21:53.125 "base_bdevs_list": [ 00:21:53.125 { 00:21:53.125 "name": null, 00:21:53.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.125 "is_configured": false, 00:21:53.125 "data_offset": 2048, 00:21:53.125 "data_size": 63488 00:21:53.125 }, 00:21:53.125 { 00:21:53.125 "name": "BaseBdev2", 00:21:53.125 "uuid": "447f8e52-53a3-4f30-bca9-e21ea9ec6a45", 00:21:53.125 "is_configured": true, 00:21:53.125 "data_offset": 2048, 00:21:53.125 "data_size": 63488 00:21:53.125 }, 00:21:53.125 { 00:21:53.125 "name": "BaseBdev3", 00:21:53.125 "uuid": "b3a40721-91ba-42eb-8ee7-d69ca45a5248", 00:21:53.125 "is_configured": true, 00:21:53.125 "data_offset": 2048, 00:21:53.125 "data_size": 63488 00:21:53.125 }, 00:21:53.125 { 00:21:53.125 "name": "BaseBdev4", 00:21:53.125 "uuid": "6cd33a0d-0a71-4984-ba1b-d82d16d2a0db", 00:21:53.125 "is_configured": true, 00:21:53.125 "data_offset": 2048, 00:21:53.125 "data_size": 63488 00:21:53.125 } 00:21:53.125 ] 00:21:53.125 }' 00:21:53.125 23:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.125 23:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.691 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:53.691 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:53.691 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.691 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:53.953 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:53.953 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.953 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:21:54.213 [2024-07-13 23:07:43.609401] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:54.471 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:54.471 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:54.471 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.471 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:54.729 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:54.729 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:54.729 23:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:54.729 [2024-07-13 23:07:44.083810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:54.729 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:54.729 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:54.729 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.729 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:54.988 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:54.988 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:54.988 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:55.246 [2024-07-13 23:07:44.529659] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:55.246 [2024-07-13 23:07:44.529910] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:21:55.246 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:55.246 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:55.246 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.246 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:55.505 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:55.505 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:55.505 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:55.505 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:55.505 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:55.505 23:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:55.764 BaseBdev2 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:55.764 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.022 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:56.282 [ 00:21:56.282 { 00:21:56.282 "name": "BaseBdev2", 00:21:56.282 "aliases": [ 00:21:56.282 "e9316474-3dc4-452a-ad15-2502ea6d97bc" 00:21:56.282 ], 00:21:56.282 "product_name": "Malloc disk", 00:21:56.282 "block_size": 512, 00:21:56.282 "num_blocks": 65536, 00:21:56.282 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:21:56.282 "assigned_rate_limits": { 00:21:56.282 "rw_ios_per_sec": 0, 00:21:56.282 "rw_mbytes_per_sec": 0, 00:21:56.282 "r_mbytes_per_sec": 0, 00:21:56.282 "w_mbytes_per_sec": 0 00:21:56.282 }, 00:21:56.282 "claimed": false, 00:21:56.282 "zoned": false, 00:21:56.282 "supported_io_types": { 00:21:56.282 "read": true, 00:21:56.282 "write": true, 00:21:56.282 "unmap": true, 00:21:56.282 "flush": true, 00:21:56.282 "reset": true, 00:21:56.282 "nvme_admin": false, 00:21:56.282 "nvme_io": false, 00:21:56.282 "nvme_io_md": false, 00:21:56.282 "write_zeroes": true, 00:21:56.282 "zcopy": true, 00:21:56.282 "get_zone_info": false, 00:21:56.282 "zone_management": false, 00:21:56.282 "zone_append": false, 00:21:56.282 "compare": false, 00:21:56.282 "compare_and_write": false, 00:21:56.282 "abort": true, 00:21:56.282 "seek_hole": false, 00:21:56.282 "seek_data": false, 00:21:56.282 "copy": true, 00:21:56.282 "nvme_iov_md": false 00:21:56.282 }, 00:21:56.282 "memory_domains": [ 00:21:56.282 { 00:21:56.282 "dma_device_id": "system", 00:21:56.282 "dma_device_type": 1 00:21:56.282 }, 00:21:56.282 { 00:21:56.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.282 "dma_device_type": 2 00:21:56.282 } 00:21:56.282 ], 00:21:56.282 "driver_specific": {} 00:21:56.282 } 00:21:56.282 ] 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:56.282 BaseBdev3 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:56.282 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.540 23:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:56.799 [ 00:21:56.799 { 00:21:56.799 "name": "BaseBdev3", 00:21:56.799 "aliases": [ 00:21:56.799 "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806" 00:21:56.799 ], 00:21:56.799 "product_name": "Malloc disk", 00:21:56.799 "block_size": 512, 00:21:56.799 "num_blocks": 65536, 00:21:56.799 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:21:56.799 "assigned_rate_limits": { 00:21:56.799 "rw_ios_per_sec": 0, 00:21:56.800 "rw_mbytes_per_sec": 0, 00:21:56.800 "r_mbytes_per_sec": 0, 00:21:56.800 "w_mbytes_per_sec": 0 00:21:56.800 }, 00:21:56.800 "claimed": false, 00:21:56.800 "zoned": false, 00:21:56.800 "supported_io_types": { 00:21:56.800 "read": true, 00:21:56.800 "write": true, 00:21:56.800 "unmap": true, 00:21:56.800 "flush": true, 00:21:56.800 "reset": true, 00:21:56.800 "nvme_admin": false, 00:21:56.800 "nvme_io": false, 00:21:56.800 "nvme_io_md": false, 00:21:56.800 "write_zeroes": true, 00:21:56.800 "zcopy": true, 00:21:56.800 "get_zone_info": false, 00:21:56.800 "zone_management": false, 00:21:56.800 "zone_append": false, 00:21:56.800 "compare": false, 00:21:56.800 "compare_and_write": false, 00:21:56.800 "abort": true, 00:21:56.800 "seek_hole": false, 00:21:56.800 "seek_data": false, 00:21:56.800 "copy": true, 00:21:56.800 "nvme_iov_md": false 00:21:56.800 }, 00:21:56.800 "memory_domains": [ 00:21:56.800 { 00:21:56.800 "dma_device_id": "system", 00:21:56.800 "dma_device_type": 1 00:21:56.800 }, 00:21:56.800 { 00:21:56.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.800 "dma_device_type": 2 00:21:56.800 } 00:21:56.800 ], 00:21:56.800 "driver_specific": {} 00:21:56.800 } 00:21:56.800 ] 00:21:56.800 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:56.800 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:56.800 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:56.800 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:57.059 BaseBdev4 00:21:57.059 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:57.059 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:57.059 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:57.059 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:57.059 23:07:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:57.059 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:57.059 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.318 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:57.577 [ 00:21:57.577 { 00:21:57.577 "name": "BaseBdev4", 00:21:57.577 "aliases": [ 00:21:57.577 "4b7c4178-5336-4341-95fc-fb65b61468cf" 00:21:57.577 ], 00:21:57.577 "product_name": "Malloc disk", 00:21:57.577 "block_size": 512, 00:21:57.577 "num_blocks": 65536, 00:21:57.577 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:21:57.577 "assigned_rate_limits": { 00:21:57.577 "rw_ios_per_sec": 0, 00:21:57.577 "rw_mbytes_per_sec": 0, 00:21:57.577 "r_mbytes_per_sec": 0, 00:21:57.577 "w_mbytes_per_sec": 0 00:21:57.577 }, 00:21:57.577 "claimed": false, 00:21:57.577 "zoned": false, 00:21:57.577 "supported_io_types": { 00:21:57.577 "read": true, 00:21:57.577 "write": true, 00:21:57.577 "unmap": true, 00:21:57.577 "flush": true, 00:21:57.577 "reset": true, 00:21:57.577 "nvme_admin": false, 00:21:57.577 "nvme_io": false, 00:21:57.577 "nvme_io_md": false, 00:21:57.577 "write_zeroes": true, 00:21:57.577 "zcopy": true, 00:21:57.577 "get_zone_info": false, 00:21:57.577 "zone_management": false, 00:21:57.577 "zone_append": false, 00:21:57.577 "compare": false, 00:21:57.577 "compare_and_write": false, 00:21:57.577 "abort": true, 00:21:57.577 "seek_hole": false, 00:21:57.577 "seek_data": false, 00:21:57.577 "copy": true, 00:21:57.577 "nvme_iov_md": false 00:21:57.577 }, 00:21:57.577 "memory_domains": [ 00:21:57.577 { 00:21:57.577 "dma_device_id": "system", 00:21:57.577 "dma_device_type": 1 00:21:57.577 }, 00:21:57.577 { 00:21:57.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.577 "dma_device_type": 2 00:21:57.577 } 00:21:57.577 ], 00:21:57.577 "driver_specific": {} 00:21:57.577 } 00:21:57.577 ] 00:21:57.577 23:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:57.577 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:57.577 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:57.577 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:57.577 [2024-07-13 23:07:46.971846] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:57.577 [2024-07-13 23:07:46.972098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:57.577 [2024-07-13 23:07:46.972243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.577 [2024-07-13 23:07:46.974368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.577 [2024-07-13 23:07:46.974553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.836 23:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.836 23:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.836 "name": "Existed_Raid", 00:21:57.836 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:21:57.836 "strip_size_kb": 64, 00:21:57.836 "state": "configuring", 00:21:57.836 "raid_level": "raid0", 00:21:57.836 "superblock": true, 00:21:57.836 "num_base_bdevs": 4, 00:21:57.836 "num_base_bdevs_discovered": 3, 00:21:57.836 "num_base_bdevs_operational": 4, 00:21:57.836 "base_bdevs_list": [ 00:21:57.836 { 00:21:57.836 "name": "BaseBdev1", 00:21:57.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.836 "is_configured": false, 00:21:57.836 "data_offset": 0, 00:21:57.836 "data_size": 0 00:21:57.836 }, 00:21:57.836 { 00:21:57.836 "name": "BaseBdev2", 00:21:57.836 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:21:57.836 "is_configured": true, 00:21:57.836 "data_offset": 2048, 00:21:57.836 "data_size": 63488 00:21:57.836 }, 00:21:57.836 { 00:21:57.836 "name": "BaseBdev3", 00:21:57.836 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:21:57.836 "is_configured": true, 00:21:57.836 "data_offset": 2048, 00:21:57.836 "data_size": 63488 00:21:57.836 }, 00:21:57.836 { 00:21:57.836 "name": "BaseBdev4", 00:21:57.836 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:21:57.836 "is_configured": true, 00:21:57.836 "data_offset": 2048, 00:21:57.836 "data_size": 63488 00:21:57.836 } 00:21:57.836 ] 00:21:57.836 }' 00:21:57.836 23:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.836 23:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.773 23:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:58.773 [2024-07-13 23:07:48.104092] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:58.773 23:07:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.773 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.031 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.032 "name": "Existed_Raid", 00:21:59.032 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:21:59.032 "strip_size_kb": 64, 00:21:59.032 "state": "configuring", 00:21:59.032 "raid_level": "raid0", 00:21:59.032 "superblock": true, 00:21:59.032 "num_base_bdevs": 4, 00:21:59.032 "num_base_bdevs_discovered": 2, 00:21:59.032 "num_base_bdevs_operational": 4, 00:21:59.032 "base_bdevs_list": [ 00:21:59.032 { 00:21:59.032 "name": "BaseBdev1", 00:21:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.032 "is_configured": false, 00:21:59.032 "data_offset": 0, 00:21:59.032 "data_size": 0 00:21:59.032 }, 00:21:59.032 { 00:21:59.032 "name": null, 00:21:59.032 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:21:59.032 "is_configured": false, 00:21:59.032 "data_offset": 2048, 00:21:59.032 "data_size": 63488 00:21:59.032 }, 00:21:59.032 { 00:21:59.032 "name": "BaseBdev3", 00:21:59.032 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:21:59.032 "is_configured": true, 00:21:59.032 "data_offset": 2048, 00:21:59.032 "data_size": 63488 00:21:59.032 }, 00:21:59.032 { 00:21:59.032 "name": "BaseBdev4", 00:21:59.032 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:21:59.032 "is_configured": true, 00:21:59.032 "data_offset": 2048, 00:21:59.032 "data_size": 63488 00:21:59.032 } 00:21:59.032 ] 00:21:59.032 }' 00:21:59.032 23:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.032 23:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.966 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:59.966 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.966 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:59.966 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:00.224 [2024-07-13 23:07:49.461105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.224 BaseBdev1 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:00.224 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.482 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:00.482 [ 00:22:00.482 { 00:22:00.482 "name": "BaseBdev1", 00:22:00.482 "aliases": [ 00:22:00.482 "7dc11157-eed0-4c13-b6fd-11a86ab2fff9" 00:22:00.482 ], 00:22:00.482 "product_name": "Malloc disk", 00:22:00.482 "block_size": 512, 00:22:00.482 "num_blocks": 65536, 00:22:00.482 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:00.482 "assigned_rate_limits": { 00:22:00.482 "rw_ios_per_sec": 0, 00:22:00.482 "rw_mbytes_per_sec": 0, 00:22:00.482 "r_mbytes_per_sec": 0, 00:22:00.482 "w_mbytes_per_sec": 0 00:22:00.482 }, 00:22:00.482 "claimed": true, 00:22:00.482 "claim_type": "exclusive_write", 00:22:00.482 "zoned": false, 00:22:00.482 "supported_io_types": { 00:22:00.482 "read": true, 00:22:00.482 "write": true, 00:22:00.482 "unmap": true, 00:22:00.482 "flush": true, 00:22:00.482 "reset": true, 00:22:00.482 "nvme_admin": false, 00:22:00.482 "nvme_io": false, 00:22:00.482 "nvme_io_md": false, 00:22:00.482 "write_zeroes": true, 00:22:00.482 "zcopy": true, 00:22:00.482 "get_zone_info": false, 00:22:00.482 "zone_management": false, 00:22:00.482 "zone_append": false, 00:22:00.482 "compare": false, 00:22:00.482 "compare_and_write": false, 00:22:00.482 "abort": true, 00:22:00.482 "seek_hole": false, 00:22:00.482 "seek_data": false, 00:22:00.482 "copy": true, 00:22:00.482 "nvme_iov_md": false 00:22:00.482 }, 00:22:00.482 "memory_domains": [ 00:22:00.482 { 00:22:00.482 "dma_device_id": "system", 00:22:00.482 "dma_device_type": 1 00:22:00.482 }, 00:22:00.482 { 00:22:00.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.482 "dma_device_type": 2 00:22:00.482 } 00:22:00.482 ], 00:22:00.482 "driver_specific": {} 00:22:00.483 } 00:22:00.483 ] 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.741 23:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.741 23:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.741 "name": "Existed_Raid", 00:22:00.741 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:00.741 "strip_size_kb": 64, 00:22:00.741 "state": "configuring", 00:22:00.741 "raid_level": "raid0", 00:22:00.741 "superblock": true, 00:22:00.741 "num_base_bdevs": 4, 00:22:00.741 "num_base_bdevs_discovered": 3, 00:22:00.741 "num_base_bdevs_operational": 4, 00:22:00.741 "base_bdevs_list": [ 00:22:00.741 { 00:22:00.741 "name": "BaseBdev1", 00:22:00.741 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:00.741 "is_configured": true, 00:22:00.741 "data_offset": 2048, 00:22:00.741 "data_size": 63488 00:22:00.741 }, 00:22:00.741 { 00:22:00.741 "name": null, 00:22:00.741 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:00.741 "is_configured": false, 00:22:00.741 "data_offset": 2048, 00:22:00.741 "data_size": 63488 00:22:00.741 }, 00:22:00.741 { 00:22:00.741 "name": "BaseBdev3", 00:22:00.741 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:00.741 "is_configured": true, 00:22:00.741 "data_offset": 2048, 00:22:00.741 "data_size": 63488 00:22:00.741 }, 00:22:00.741 { 00:22:00.741 "name": "BaseBdev4", 00:22:00.741 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:00.741 "is_configured": true, 00:22:00.741 "data_offset": 2048, 00:22:00.741 "data_size": 63488 00:22:00.741 } 00:22:00.741 ] 00:22:00.741 }' 00:22:00.741 23:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.741 23:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.673 23:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.673 23:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:01.673 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:01.673 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:01.931 [2024-07-13 23:07:51.265691] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
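A note on the check that recurs throughout this trace: verify_raid_bdev_state (bdev_raid.sh@116-128) re-reads the raid bdev after every base-bdev add or remove and asserts its fields against the expected values passed in. A minimal condensed sketch of that pattern, assuming the rpc.py path and socket used in this run; the real helper also derives num_base_bdevs_discovered by counting the is_configured entries in base_bdevs_list, which is omitted here, and the variable names are illustrative rather than taken from the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path used throughout this run
    sock=/var/tmp/spdk-raid.sock
    # fetch all raid bdevs and keep only the one under test
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # assert the fields the test verifies (expected values for this step of the test)
    [[ $(jq -r .state <<<"$info") == configuring ]]
    [[ $(jq -r .raid_level <<<"$info") == raid0 ]]
    [[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<<"$info") == 4 ]]
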
00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.931 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.189 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.189 "name": "Existed_Raid", 00:22:02.189 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:02.189 "strip_size_kb": 64, 00:22:02.189 "state": "configuring", 00:22:02.189 "raid_level": "raid0", 00:22:02.189 "superblock": true, 00:22:02.189 "num_base_bdevs": 4, 00:22:02.189 "num_base_bdevs_discovered": 2, 00:22:02.189 "num_base_bdevs_operational": 4, 00:22:02.189 "base_bdevs_list": [ 00:22:02.189 { 00:22:02.189 "name": "BaseBdev1", 00:22:02.189 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:02.189 "is_configured": true, 00:22:02.189 "data_offset": 2048, 00:22:02.189 "data_size": 63488 00:22:02.189 }, 00:22:02.189 { 00:22:02.189 "name": null, 00:22:02.189 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:02.189 "is_configured": false, 00:22:02.189 "data_offset": 2048, 00:22:02.189 "data_size": 63488 00:22:02.189 }, 00:22:02.189 { 00:22:02.189 "name": null, 00:22:02.189 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:02.189 "is_configured": false, 00:22:02.189 "data_offset": 2048, 00:22:02.189 "data_size": 63488 00:22:02.189 }, 00:22:02.189 { 00:22:02.189 "name": "BaseBdev4", 00:22:02.189 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:02.189 "is_configured": true, 00:22:02.189 "data_offset": 2048, 00:22:02.189 "data_size": 63488 00:22:02.189 } 00:22:02.189 ] 00:22:02.189 }' 00:22:02.189 23:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.189 23:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.124 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.124 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:03.124 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:03.124 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:03.383 [2024-07-13 23:07:52.722028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.383 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.642 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.642 "name": "Existed_Raid", 00:22:03.642 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:03.642 "strip_size_kb": 64, 00:22:03.642 "state": "configuring", 00:22:03.642 "raid_level": "raid0", 00:22:03.642 "superblock": true, 00:22:03.642 "num_base_bdevs": 4, 00:22:03.642 "num_base_bdevs_discovered": 3, 00:22:03.642 "num_base_bdevs_operational": 4, 00:22:03.642 "base_bdevs_list": [ 00:22:03.642 { 00:22:03.642 "name": "BaseBdev1", 00:22:03.642 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:03.642 "is_configured": true, 00:22:03.642 "data_offset": 2048, 00:22:03.642 "data_size": 63488 00:22:03.642 }, 00:22:03.642 { 00:22:03.642 "name": null, 00:22:03.642 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:03.642 "is_configured": false, 00:22:03.642 "data_offset": 2048, 00:22:03.642 "data_size": 63488 00:22:03.642 }, 00:22:03.642 { 00:22:03.642 "name": "BaseBdev3", 00:22:03.642 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:03.642 "is_configured": true, 00:22:03.642 "data_offset": 2048, 00:22:03.642 "data_size": 63488 00:22:03.642 }, 00:22:03.642 { 00:22:03.642 "name": "BaseBdev4", 00:22:03.642 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:03.642 "is_configured": true, 00:22:03.642 "data_offset": 2048, 00:22:03.642 "data_size": 63488 00:22:03.642 } 00:22:03.642 ] 00:22:03.642 }' 00:22:03.642 23:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.642 23:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.623 23:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.623 23:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:04.623 23:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:04.623 23:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:04.881 [2024-07-13 23:07:54.162424] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.881 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.139 23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.139 "name": "Existed_Raid", 00:22:05.139 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:05.139 "strip_size_kb": 64, 00:22:05.139 "state": "configuring", 00:22:05.139 "raid_level": "raid0", 00:22:05.139 "superblock": true, 00:22:05.139 "num_base_bdevs": 4, 00:22:05.139 "num_base_bdevs_discovered": 2, 00:22:05.139 "num_base_bdevs_operational": 4, 00:22:05.139 "base_bdevs_list": [ 00:22:05.139 { 00:22:05.139 "name": null, 00:22:05.139 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:05.139 "is_configured": false, 00:22:05.139 "data_offset": 2048, 00:22:05.139 "data_size": 63488 00:22:05.139 }, 00:22:05.139 { 00:22:05.139 "name": null, 00:22:05.139 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:05.139 "is_configured": false, 00:22:05.139 "data_offset": 2048, 00:22:05.139 "data_size": 63488 00:22:05.139 }, 00:22:05.139 { 00:22:05.139 "name": "BaseBdev3", 00:22:05.139 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:05.139 "is_configured": true, 00:22:05.139 "data_offset": 2048, 00:22:05.139 "data_size": 63488 00:22:05.139 }, 00:22:05.139 { 00:22:05.139 "name": "BaseBdev4", 00:22:05.139 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:05.139 "is_configured": true, 00:22:05.139 "data_offset": 2048, 00:22:05.139 "data_size": 63488 00:22:05.139 } 00:22:05.139 ] 00:22:05.139 }' 00:22:05.139 
23:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.139 23:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.074 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:06.075 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:06.075 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:06.333 [2024-07-13 23:07:55.672615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.333 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.591 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.591 "name": "Existed_Raid", 00:22:06.591 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:06.591 "strip_size_kb": 64, 00:22:06.591 "state": "configuring", 00:22:06.591 "raid_level": "raid0", 00:22:06.591 "superblock": true, 00:22:06.591 "num_base_bdevs": 4, 00:22:06.591 "num_base_bdevs_discovered": 3, 00:22:06.591 "num_base_bdevs_operational": 4, 00:22:06.591 "base_bdevs_list": [ 00:22:06.591 { 00:22:06.591 "name": null, 00:22:06.591 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:06.591 "is_configured": false, 00:22:06.591 "data_offset": 2048, 00:22:06.591 "data_size": 63488 00:22:06.591 }, 00:22:06.591 { 00:22:06.591 "name": "BaseBdev2", 00:22:06.591 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:06.591 "is_configured": true, 00:22:06.591 "data_offset": 2048, 00:22:06.591 "data_size": 63488 00:22:06.591 }, 00:22:06.591 { 00:22:06.591 "name": "BaseBdev3", 00:22:06.591 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:06.591 
"is_configured": true, 00:22:06.591 "data_offset": 2048, 00:22:06.591 "data_size": 63488 00:22:06.591 }, 00:22:06.591 { 00:22:06.591 "name": "BaseBdev4", 00:22:06.591 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:06.591 "is_configured": true, 00:22:06.591 "data_offset": 2048, 00:22:06.591 "data_size": 63488 00:22:06.591 } 00:22:06.591 ] 00:22:06.591 }' 00:22:06.591 23:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.591 23:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.523 23:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.523 23:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:07.523 23:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:07.523 23:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.523 23:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:07.780 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7dc11157-eed0-4c13-b6fd-11a86ab2fff9 00:22:08.039 [2024-07-13 23:07:57.312267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:08.039 [2024-07-13 23:07:57.312669] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:22:08.039 [2024-07-13 23:07:57.312796] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:08.039 [2024-07-13 23:07:57.312950] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:08.039 [2024-07-13 23:07:57.313476] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:22:08.039 [2024-07-13 23:07:57.313615] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:22:08.039 [2024-07-13 23:07:57.313860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.039 NewBaseBdev 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:08.039 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.298 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:22:08.555 [ 00:22:08.555 { 00:22:08.555 "name": "NewBaseBdev", 00:22:08.555 "aliases": [ 00:22:08.555 "7dc11157-eed0-4c13-b6fd-11a86ab2fff9" 00:22:08.555 ], 00:22:08.555 "product_name": "Malloc disk", 00:22:08.555 "block_size": 512, 00:22:08.555 "num_blocks": 65536, 00:22:08.555 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:08.555 "assigned_rate_limits": { 00:22:08.555 "rw_ios_per_sec": 0, 00:22:08.555 "rw_mbytes_per_sec": 0, 00:22:08.555 "r_mbytes_per_sec": 0, 00:22:08.555 "w_mbytes_per_sec": 0 00:22:08.555 }, 00:22:08.555 "claimed": true, 00:22:08.555 "claim_type": "exclusive_write", 00:22:08.555 "zoned": false, 00:22:08.555 "supported_io_types": { 00:22:08.555 "read": true, 00:22:08.555 "write": true, 00:22:08.555 "unmap": true, 00:22:08.555 "flush": true, 00:22:08.555 "reset": true, 00:22:08.555 "nvme_admin": false, 00:22:08.555 "nvme_io": false, 00:22:08.555 "nvme_io_md": false, 00:22:08.555 "write_zeroes": true, 00:22:08.555 "zcopy": true, 00:22:08.555 "get_zone_info": false, 00:22:08.555 "zone_management": false, 00:22:08.555 "zone_append": false, 00:22:08.555 "compare": false, 00:22:08.555 "compare_and_write": false, 00:22:08.555 "abort": true, 00:22:08.555 "seek_hole": false, 00:22:08.555 "seek_data": false, 00:22:08.555 "copy": true, 00:22:08.555 "nvme_iov_md": false 00:22:08.555 }, 00:22:08.555 "memory_domains": [ 00:22:08.555 { 00:22:08.555 "dma_device_id": "system", 00:22:08.555 "dma_device_type": 1 00:22:08.555 }, 00:22:08.555 { 00:22:08.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.555 "dma_device_type": 2 00:22:08.555 } 00:22:08.555 ], 00:22:08.555 "driver_specific": {} 00:22:08.555 } 00:22:08.555 ] 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.555 23:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.813 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.813 "name": "Existed_Raid", 00:22:08.813 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:08.813 "strip_size_kb": 64, 00:22:08.813 "state": 
"online", 00:22:08.813 "raid_level": "raid0", 00:22:08.813 "superblock": true, 00:22:08.813 "num_base_bdevs": 4, 00:22:08.813 "num_base_bdevs_discovered": 4, 00:22:08.813 "num_base_bdevs_operational": 4, 00:22:08.813 "base_bdevs_list": [ 00:22:08.813 { 00:22:08.813 "name": "NewBaseBdev", 00:22:08.813 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:08.813 "is_configured": true, 00:22:08.813 "data_offset": 2048, 00:22:08.813 "data_size": 63488 00:22:08.813 }, 00:22:08.813 { 00:22:08.813 "name": "BaseBdev2", 00:22:08.813 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:08.813 "is_configured": true, 00:22:08.813 "data_offset": 2048, 00:22:08.813 "data_size": 63488 00:22:08.813 }, 00:22:08.813 { 00:22:08.813 "name": "BaseBdev3", 00:22:08.813 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:08.813 "is_configured": true, 00:22:08.813 "data_offset": 2048, 00:22:08.813 "data_size": 63488 00:22:08.813 }, 00:22:08.813 { 00:22:08.813 "name": "BaseBdev4", 00:22:08.813 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:08.813 "is_configured": true, 00:22:08.813 "data_offset": 2048, 00:22:08.813 "data_size": 63488 00:22:08.813 } 00:22:08.813 ] 00:22:08.813 }' 00:22:08.813 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.813 23:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:09.378 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:09.636 [2024-07-13 23:07:58.949135] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.636 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:09.636 "name": "Existed_Raid", 00:22:09.636 "aliases": [ 00:22:09.636 "c52053fe-899b-40d8-a6ca-f174855082c4" 00:22:09.636 ], 00:22:09.636 "product_name": "Raid Volume", 00:22:09.636 "block_size": 512, 00:22:09.636 "num_blocks": 253952, 00:22:09.636 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:09.636 "assigned_rate_limits": { 00:22:09.636 "rw_ios_per_sec": 0, 00:22:09.636 "rw_mbytes_per_sec": 0, 00:22:09.636 "r_mbytes_per_sec": 0, 00:22:09.636 "w_mbytes_per_sec": 0 00:22:09.636 }, 00:22:09.636 "claimed": false, 00:22:09.636 "zoned": false, 00:22:09.636 "supported_io_types": { 00:22:09.636 "read": true, 00:22:09.636 "write": true, 00:22:09.636 "unmap": true, 00:22:09.636 "flush": true, 00:22:09.636 "reset": true, 00:22:09.636 "nvme_admin": false, 00:22:09.636 "nvme_io": false, 00:22:09.636 "nvme_io_md": false, 00:22:09.636 "write_zeroes": true, 00:22:09.636 "zcopy": false, 00:22:09.636 "get_zone_info": false, 00:22:09.636 
"zone_management": false, 00:22:09.636 "zone_append": false, 00:22:09.637 "compare": false, 00:22:09.637 "compare_and_write": false, 00:22:09.637 "abort": false, 00:22:09.637 "seek_hole": false, 00:22:09.637 "seek_data": false, 00:22:09.637 "copy": false, 00:22:09.637 "nvme_iov_md": false 00:22:09.637 }, 00:22:09.637 "memory_domains": [ 00:22:09.637 { 00:22:09.637 "dma_device_id": "system", 00:22:09.637 "dma_device_type": 1 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.637 "dma_device_type": 2 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "system", 00:22:09.637 "dma_device_type": 1 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.637 "dma_device_type": 2 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "system", 00:22:09.637 "dma_device_type": 1 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.637 "dma_device_type": 2 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "system", 00:22:09.637 "dma_device_type": 1 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.637 "dma_device_type": 2 00:22:09.637 } 00:22:09.637 ], 00:22:09.637 "driver_specific": { 00:22:09.637 "raid": { 00:22:09.637 "uuid": "c52053fe-899b-40d8-a6ca-f174855082c4", 00:22:09.637 "strip_size_kb": 64, 00:22:09.637 "state": "online", 00:22:09.637 "raid_level": "raid0", 00:22:09.637 "superblock": true, 00:22:09.637 "num_base_bdevs": 4, 00:22:09.637 "num_base_bdevs_discovered": 4, 00:22:09.637 "num_base_bdevs_operational": 4, 00:22:09.637 "base_bdevs_list": [ 00:22:09.637 { 00:22:09.637 "name": "NewBaseBdev", 00:22:09.637 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:09.637 "is_configured": true, 00:22:09.637 "data_offset": 2048, 00:22:09.637 "data_size": 63488 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "name": "BaseBdev2", 00:22:09.637 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:09.637 "is_configured": true, 00:22:09.637 "data_offset": 2048, 00:22:09.637 "data_size": 63488 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "name": "BaseBdev3", 00:22:09.637 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:09.637 "is_configured": true, 00:22:09.637 "data_offset": 2048, 00:22:09.637 "data_size": 63488 00:22:09.637 }, 00:22:09.637 { 00:22:09.637 "name": "BaseBdev4", 00:22:09.637 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:09.637 "is_configured": true, 00:22:09.637 "data_offset": 2048, 00:22:09.637 "data_size": 63488 00:22:09.637 } 00:22:09.637 ] 00:22:09.637 } 00:22:09.637 } 00:22:09.637 }' 00:22:09.637 23:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:09.637 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:09.637 BaseBdev2 00:22:09.637 BaseBdev3 00:22:09.637 BaseBdev4' 00:22:09.637 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.637 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:09.637 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.895 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.895 "name": 
"NewBaseBdev", 00:22:09.895 "aliases": [ 00:22:09.895 "7dc11157-eed0-4c13-b6fd-11a86ab2fff9" 00:22:09.895 ], 00:22:09.895 "product_name": "Malloc disk", 00:22:09.895 "block_size": 512, 00:22:09.895 "num_blocks": 65536, 00:22:09.895 "uuid": "7dc11157-eed0-4c13-b6fd-11a86ab2fff9", 00:22:09.895 "assigned_rate_limits": { 00:22:09.895 "rw_ios_per_sec": 0, 00:22:09.895 "rw_mbytes_per_sec": 0, 00:22:09.895 "r_mbytes_per_sec": 0, 00:22:09.895 "w_mbytes_per_sec": 0 00:22:09.895 }, 00:22:09.895 "claimed": true, 00:22:09.895 "claim_type": "exclusive_write", 00:22:09.895 "zoned": false, 00:22:09.895 "supported_io_types": { 00:22:09.895 "read": true, 00:22:09.895 "write": true, 00:22:09.895 "unmap": true, 00:22:09.895 "flush": true, 00:22:09.895 "reset": true, 00:22:09.895 "nvme_admin": false, 00:22:09.895 "nvme_io": false, 00:22:09.895 "nvme_io_md": false, 00:22:09.895 "write_zeroes": true, 00:22:09.895 "zcopy": true, 00:22:09.895 "get_zone_info": false, 00:22:09.895 "zone_management": false, 00:22:09.895 "zone_append": false, 00:22:09.895 "compare": false, 00:22:09.895 "compare_and_write": false, 00:22:09.895 "abort": true, 00:22:09.895 "seek_hole": false, 00:22:09.895 "seek_data": false, 00:22:09.895 "copy": true, 00:22:09.895 "nvme_iov_md": false 00:22:09.895 }, 00:22:09.895 "memory_domains": [ 00:22:09.895 { 00:22:09.895 "dma_device_id": "system", 00:22:09.895 "dma_device_type": 1 00:22:09.895 }, 00:22:09.895 { 00:22:09.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.895 "dma_device_type": 2 00:22:09.895 } 00:22:09.895 ], 00:22:09.895 "driver_specific": {} 00:22:09.895 }' 00:22:09.895 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.153 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.411 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.411 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.411 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.411 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:10.411 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.669 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.669 "name": "BaseBdev2", 00:22:10.669 "aliases": [ 00:22:10.669 "e9316474-3dc4-452a-ad15-2502ea6d97bc" 00:22:10.669 ], 00:22:10.669 "product_name": "Malloc disk", 
00:22:10.669 "block_size": 512, 00:22:10.669 "num_blocks": 65536, 00:22:10.669 "uuid": "e9316474-3dc4-452a-ad15-2502ea6d97bc", 00:22:10.669 "assigned_rate_limits": { 00:22:10.669 "rw_ios_per_sec": 0, 00:22:10.669 "rw_mbytes_per_sec": 0, 00:22:10.669 "r_mbytes_per_sec": 0, 00:22:10.669 "w_mbytes_per_sec": 0 00:22:10.669 }, 00:22:10.669 "claimed": true, 00:22:10.669 "claim_type": "exclusive_write", 00:22:10.669 "zoned": false, 00:22:10.669 "supported_io_types": { 00:22:10.669 "read": true, 00:22:10.669 "write": true, 00:22:10.669 "unmap": true, 00:22:10.669 "flush": true, 00:22:10.669 "reset": true, 00:22:10.669 "nvme_admin": false, 00:22:10.669 "nvme_io": false, 00:22:10.669 "nvme_io_md": false, 00:22:10.669 "write_zeroes": true, 00:22:10.669 "zcopy": true, 00:22:10.669 "get_zone_info": false, 00:22:10.669 "zone_management": false, 00:22:10.669 "zone_append": false, 00:22:10.669 "compare": false, 00:22:10.669 "compare_and_write": false, 00:22:10.669 "abort": true, 00:22:10.669 "seek_hole": false, 00:22:10.669 "seek_data": false, 00:22:10.669 "copy": true, 00:22:10.669 "nvme_iov_md": false 00:22:10.669 }, 00:22:10.669 "memory_domains": [ 00:22:10.669 { 00:22:10.669 "dma_device_id": "system", 00:22:10.669 "dma_device_type": 1 00:22:10.669 }, 00:22:10.669 { 00:22:10.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.669 "dma_device_type": 2 00:22:10.669 } 00:22:10.669 ], 00:22:10.669 "driver_specific": {} 00:22:10.669 }' 00:22:10.669 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.669 23:07:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.669 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.669 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.669 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:10.927 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:11.185 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:11.185 "name": "BaseBdev3", 00:22:11.185 "aliases": [ 00:22:11.185 "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806" 00:22:11.185 ], 00:22:11.185 "product_name": "Malloc disk", 00:22:11.185 "block_size": 512, 00:22:11.185 "num_blocks": 65536, 00:22:11.185 "uuid": "530d19fe-1b61-4fd2-aac1-b1ee8c3ef806", 00:22:11.185 
"assigned_rate_limits": { 00:22:11.185 "rw_ios_per_sec": 0, 00:22:11.185 "rw_mbytes_per_sec": 0, 00:22:11.185 "r_mbytes_per_sec": 0, 00:22:11.185 "w_mbytes_per_sec": 0 00:22:11.185 }, 00:22:11.185 "claimed": true, 00:22:11.185 "claim_type": "exclusive_write", 00:22:11.185 "zoned": false, 00:22:11.185 "supported_io_types": { 00:22:11.185 "read": true, 00:22:11.185 "write": true, 00:22:11.185 "unmap": true, 00:22:11.185 "flush": true, 00:22:11.185 "reset": true, 00:22:11.185 "nvme_admin": false, 00:22:11.185 "nvme_io": false, 00:22:11.185 "nvme_io_md": false, 00:22:11.185 "write_zeroes": true, 00:22:11.185 "zcopy": true, 00:22:11.185 "get_zone_info": false, 00:22:11.185 "zone_management": false, 00:22:11.185 "zone_append": false, 00:22:11.185 "compare": false, 00:22:11.185 "compare_and_write": false, 00:22:11.185 "abort": true, 00:22:11.185 "seek_hole": false, 00:22:11.185 "seek_data": false, 00:22:11.185 "copy": true, 00:22:11.185 "nvme_iov_md": false 00:22:11.185 }, 00:22:11.185 "memory_domains": [ 00:22:11.185 { 00:22:11.185 "dma_device_id": "system", 00:22:11.185 "dma_device_type": 1 00:22:11.185 }, 00:22:11.185 { 00:22:11.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.185 "dma_device_type": 2 00:22:11.185 } 00:22:11.185 ], 00:22:11.185 "driver_specific": {} 00:22:11.185 }' 00:22:11.185 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:11.443 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:11.701 23:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:11.957 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:11.957 "name": "BaseBdev4", 00:22:11.957 "aliases": [ 00:22:11.957 "4b7c4178-5336-4341-95fc-fb65b61468cf" 00:22:11.957 ], 00:22:11.957 "product_name": "Malloc disk", 00:22:11.957 "block_size": 512, 00:22:11.957 "num_blocks": 65536, 00:22:11.957 "uuid": "4b7c4178-5336-4341-95fc-fb65b61468cf", 00:22:11.957 "assigned_rate_limits": { 00:22:11.957 "rw_ios_per_sec": 0, 00:22:11.957 "rw_mbytes_per_sec": 0, 00:22:11.957 "r_mbytes_per_sec": 0, 00:22:11.957 
"w_mbytes_per_sec": 0 00:22:11.957 }, 00:22:11.957 "claimed": true, 00:22:11.957 "claim_type": "exclusive_write", 00:22:11.957 "zoned": false, 00:22:11.957 "supported_io_types": { 00:22:11.957 "read": true, 00:22:11.957 "write": true, 00:22:11.957 "unmap": true, 00:22:11.957 "flush": true, 00:22:11.957 "reset": true, 00:22:11.957 "nvme_admin": false, 00:22:11.957 "nvme_io": false, 00:22:11.957 "nvme_io_md": false, 00:22:11.957 "write_zeroes": true, 00:22:11.957 "zcopy": true, 00:22:11.957 "get_zone_info": false, 00:22:11.957 "zone_management": false, 00:22:11.957 "zone_append": false, 00:22:11.957 "compare": false, 00:22:11.957 "compare_and_write": false, 00:22:11.957 "abort": true, 00:22:11.957 "seek_hole": false, 00:22:11.957 "seek_data": false, 00:22:11.957 "copy": true, 00:22:11.957 "nvme_iov_md": false 00:22:11.957 }, 00:22:11.957 "memory_domains": [ 00:22:11.957 { 00:22:11.957 "dma_device_id": "system", 00:22:11.957 "dma_device_type": 1 00:22:11.957 }, 00:22:11.957 { 00:22:11.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.957 "dma_device_type": 2 00:22:11.957 } 00:22:11.957 ], 00:22:11.957 "driver_specific": {} 00:22:11.957 }' 00:22:11.957 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.957 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.957 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:11.957 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.957 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:12.215 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:12.473 [2024-07-13 23:08:01.833688] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.473 [2024-07-13 23:08:01.833913] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.473 [2024-07-13 23:08:01.834175] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.473 [2024-07-13 23:08:01.834379] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.473 [2024-07-13 23:08:01.834511] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 145152 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 145152 ']' 00:22:12.473 23:08:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 145152 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145152 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145152' 00:22:12.473 killing process with pid 145152 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 145152 00:22:12.473 [2024-07-13 23:08:01.876882] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.473 23:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 145152 00:22:12.732 [2024-07-13 23:08:01.925140] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:12.991 ************************************ 00:22:12.991 END TEST raid_state_function_test_sb 00:22:12.991 ************************************ 00:22:12.991 23:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:12.991 00:22:12.991 real 0m33.652s 00:22:12.991 user 1m4.049s 00:22:12.991 sys 0m3.872s 00:22:12.991 23:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:12.991 23:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.991 23:08:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:12.991 23:08:02 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:12.991 23:08:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:12.991 23:08:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.991 23:08:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.991 ************************************ 00:22:12.991 START TEST raid_superblock_test 00:22:12.991 ************************************ 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:12.991 23:08:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=146252 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 146252 /var/tmp/spdk-raid.sock 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 146252 ']' 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:12.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.991 23:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.991 [2024-07-13 23:08:02.386117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
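Before the base-bdev setup traced below, a note on the fixture this superblock test builds: four 32 MB malloc bdevs with 512-byte blocks (hence the num_blocks of 65536 in the dumps above) are each wrapped in a passthru bdev pt1..pt4 carrying a fixed UUID, and the array is then assembled with on-disk superblocks. A condensed sketch of that sequence, reusing the rpc/sock shorthands from the earlier sketch; the loop is illustrative, while the RPCs and their arguments are the ones appearing in the trace that follows:

    for i in 1 2 3 4; do
        # 32 MB malloc bdev with 512-byte blocks, then a passthru bdev layered on it
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -z 64 sets the strip size (the strip_size_kb reported above); the trailing -s
    # requests superblocks, which is what the _sb / superblock test variants exercise
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
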
00:22:12.991 [2024-07-13 23:08:02.386614] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146252 ] 00:22:13.249 [2024-07-13 23:08:02.538421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.249 [2024-07-13 23:08:02.654288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.507 [2024-07-13 23:08:02.738836] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:14.071 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:14.328 malloc1 00:22:14.328 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:14.585 [2024-07-13 23:08:03.814925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:14.585 [2024-07-13 23:08:03.815265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.585 [2024-07-13 23:08:03.815419] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:14.585 [2024-07-13 23:08:03.815571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.585 [2024-07-13 23:08:03.818252] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.585 [2024-07-13 23:08:03.818484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:14.585 pt1 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:14.585 23:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:14.850 malloc2 00:22:14.850 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:15.125 [2024-07-13 23:08:04.281045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:15.125 [2024-07-13 23:08:04.281381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.125 [2024-07-13 23:08:04.281517] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:15.125 [2024-07-13 23:08:04.281658] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.126 [2024-07-13 23:08:04.284259] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.126 [2024-07-13 23:08:04.284437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:15.126 pt2 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:15.126 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:15.383 malloc3 00:22:15.383 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:15.641 [2024-07-13 23:08:04.792441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:15.641 [2024-07-13 23:08:04.792694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.641 [2024-07-13 23:08:04.792780] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:15.641 [2024-07-13 23:08:04.793119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.641 [2024-07-13 23:08:04.795679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.641 [2024-07-13 23:08:04.795867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:15.641 pt3 00:22:15.641 
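
[Note on the phase above, not part of the log] The loop traced here builds each RAID member the same way: a 32 MB malloc bdev with a 512-byte block size (65536 blocks, matching the num_blocks reported in the passthru dumps later in this log), wrapped in a passthru bdev with a fixed UUID so the test can address it deterministically. Condensed into a standalone sketch — the rpc() helper is shorthand introduced here, not part of the test script; commands and arguments are taken verbatim from the trace:

    # Shorthand for the JSON-RPC client invocations seen throughout this test.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # One iteration of the i=1..4 loop: a 32 MB / 512 B-block malloc bdev,
    # then a passthru wrapper with a deterministic UUID.
    rpc bdev_malloc_create 32 512 -b malloc1
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
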
23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:15.641 23:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:15.641 malloc4 00:22:15.641 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:15.899 [2024-07-13 23:08:05.230140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:15.899 [2024-07-13 23:08:05.230528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.899 [2024-07-13 23:08:05.230677] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:15.899 [2024-07-13 23:08:05.230879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.899 [2024-07-13 23:08:05.233438] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.899 [2024-07-13 23:08:05.233616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:15.899 pt4 00:22:15.899 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:15.899 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:15.899 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:16.157 [2024-07-13 23:08:05.482398] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:16.157 [2024-07-13 23:08:05.485459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:16.157 [2024-07-13 23:08:05.485744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:16.157 [2024-07-13 23:08:05.485992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:16.157 [2024-07-13 23:08:05.486436] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:22:16.157 [2024-07-13 23:08:05.486609] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:16.157 [2024-07-13 23:08:05.486903] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:16.157 [2024-07-13 23:08:05.487499] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:22:16.157 [2024-07-13 23:08:05.487631] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:22:16.157 [2024-07-13 23:08:05.487939] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.157 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.417 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.417 "name": "raid_bdev1", 00:22:16.417 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:16.417 "strip_size_kb": 64, 00:22:16.417 "state": "online", 00:22:16.417 "raid_level": "raid0", 00:22:16.417 "superblock": true, 00:22:16.417 "num_base_bdevs": 4, 00:22:16.417 "num_base_bdevs_discovered": 4, 00:22:16.417 "num_base_bdevs_operational": 4, 00:22:16.417 "base_bdevs_list": [ 00:22:16.417 { 00:22:16.417 "name": "pt1", 00:22:16.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:16.417 "is_configured": true, 00:22:16.417 "data_offset": 2048, 00:22:16.417 "data_size": 63488 00:22:16.417 }, 00:22:16.417 { 00:22:16.417 "name": "pt2", 00:22:16.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.417 "is_configured": true, 00:22:16.417 "data_offset": 2048, 00:22:16.417 "data_size": 63488 00:22:16.417 }, 00:22:16.417 { 00:22:16.417 "name": "pt3", 00:22:16.417 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:16.417 "is_configured": true, 00:22:16.417 "data_offset": 2048, 00:22:16.417 "data_size": 63488 00:22:16.417 }, 00:22:16.417 { 00:22:16.417 "name": "pt4", 00:22:16.417 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:16.418 "is_configured": true, 00:22:16.418 "data_offset": 2048, 00:22:16.418 "data_size": 63488 00:22:16.418 } 00:22:16.418 ] 00:22:16.418 }' 00:22:16.418 23:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.418 23:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- 
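
[Note on the phase above, not part of the log] With all four passthru bdevs in place, the test assembles them into a RAID0 volume: -z 64 sets the 64 KiB strip size, -r the level, and -s requests an on-disk superblock. The superblock accounts for the geometry in the dump above — each 65536-block member exposes data from a 2048-block offset, leaving data_size 63488 blocks, and 4 * 63488 = 253952, the raid's reported num_blocks. Creation and the state check reduce to the following sketch (rpc() is the shorthand from the first note; the jq filter is copied from the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # RAID0 over the four wrappers, 64 KiB strips, with an on-disk superblock (-s).
    rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # State check: select raid_bdev1 from the full listing and inspect
    # state / raid_level / strip_size_kb / num_base_bdevs_* fields.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
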
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:17.352 [2024-07-13 23:08:06.723893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.352 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:17.352 "name": "raid_bdev1", 00:22:17.352 "aliases": [ 00:22:17.352 "4c53bf84-643f-47d2-b5f0-044062ad986e" 00:22:17.352 ], 00:22:17.352 "product_name": "Raid Volume", 00:22:17.352 "block_size": 512, 00:22:17.352 "num_blocks": 253952, 00:22:17.352 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:17.352 "assigned_rate_limits": { 00:22:17.352 "rw_ios_per_sec": 0, 00:22:17.352 "rw_mbytes_per_sec": 0, 00:22:17.352 "r_mbytes_per_sec": 0, 00:22:17.352 "w_mbytes_per_sec": 0 00:22:17.352 }, 00:22:17.352 "claimed": false, 00:22:17.352 "zoned": false, 00:22:17.352 "supported_io_types": { 00:22:17.352 "read": true, 00:22:17.352 "write": true, 00:22:17.352 "unmap": true, 00:22:17.352 "flush": true, 00:22:17.352 "reset": true, 00:22:17.353 "nvme_admin": false, 00:22:17.353 "nvme_io": false, 00:22:17.353 "nvme_io_md": false, 00:22:17.353 "write_zeroes": true, 00:22:17.353 "zcopy": false, 00:22:17.353 "get_zone_info": false, 00:22:17.353 "zone_management": false, 00:22:17.353 "zone_append": false, 00:22:17.353 "compare": false, 00:22:17.353 "compare_and_write": false, 00:22:17.353 "abort": false, 00:22:17.353 "seek_hole": false, 00:22:17.353 "seek_data": false, 00:22:17.353 "copy": false, 00:22:17.353 "nvme_iov_md": false 00:22:17.353 }, 00:22:17.353 "memory_domains": [ 00:22:17.353 { 00:22:17.353 "dma_device_id": "system", 00:22:17.353 "dma_device_type": 1 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.353 "dma_device_type": 2 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "system", 00:22:17.353 "dma_device_type": 1 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.353 "dma_device_type": 2 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "system", 00:22:17.353 "dma_device_type": 1 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.353 "dma_device_type": 2 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "system", 00:22:17.353 "dma_device_type": 1 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.353 "dma_device_type": 2 00:22:17.353 } 00:22:17.353 ], 00:22:17.353 "driver_specific": { 00:22:17.353 "raid": { 00:22:17.353 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:17.353 "strip_size_kb": 64, 00:22:17.353 "state": "online", 00:22:17.353 "raid_level": "raid0", 00:22:17.353 "superblock": true, 00:22:17.353 "num_base_bdevs": 4, 00:22:17.353 "num_base_bdevs_discovered": 4, 00:22:17.353 "num_base_bdevs_operational": 4, 00:22:17.353 "base_bdevs_list": [ 00:22:17.353 { 00:22:17.353 "name": "pt1", 00:22:17.353 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:17.353 "is_configured": true, 00:22:17.353 "data_offset": 2048, 00:22:17.353 "data_size": 63488 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "name": "pt2", 00:22:17.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.353 "is_configured": true, 00:22:17.353 "data_offset": 2048, 00:22:17.353 "data_size": 63488 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "name": "pt3", 00:22:17.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.353 "is_configured": true, 00:22:17.353 "data_offset": 2048, 00:22:17.353 "data_size": 63488 00:22:17.353 }, 00:22:17.353 { 00:22:17.353 "name": "pt4", 00:22:17.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:17.353 "is_configured": true, 00:22:17.353 "data_offset": 2048, 00:22:17.353 "data_size": 63488 00:22:17.353 } 00:22:17.353 ] 00:22:17.353 } 00:22:17.353 } 00:22:17.353 }' 00:22:17.353 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.611 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:17.611 pt2 00:22:17.611 pt3 00:22:17.611 pt4' 00:22:17.611 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.611 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:17.611 23:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.869 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.869 "name": "pt1", 00:22:17.869 "aliases": [ 00:22:17.869 "00000000-0000-0000-0000-000000000001" 00:22:17.869 ], 00:22:17.869 "product_name": "passthru", 00:22:17.869 "block_size": 512, 00:22:17.869 "num_blocks": 65536, 00:22:17.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.869 "assigned_rate_limits": { 00:22:17.869 "rw_ios_per_sec": 0, 00:22:17.869 "rw_mbytes_per_sec": 0, 00:22:17.869 "r_mbytes_per_sec": 0, 00:22:17.869 "w_mbytes_per_sec": 0 00:22:17.869 }, 00:22:17.869 "claimed": true, 00:22:17.869 "claim_type": "exclusive_write", 00:22:17.869 "zoned": false, 00:22:17.869 "supported_io_types": { 00:22:17.869 "read": true, 00:22:17.869 "write": true, 00:22:17.869 "unmap": true, 00:22:17.869 "flush": true, 00:22:17.869 "reset": true, 00:22:17.869 "nvme_admin": false, 00:22:17.869 "nvme_io": false, 00:22:17.869 "nvme_io_md": false, 00:22:17.869 "write_zeroes": true, 00:22:17.869 "zcopy": true, 00:22:17.869 "get_zone_info": false, 00:22:17.869 "zone_management": false, 00:22:17.869 "zone_append": false, 00:22:17.869 "compare": false, 00:22:17.869 "compare_and_write": false, 00:22:17.869 "abort": true, 00:22:17.869 "seek_hole": false, 00:22:17.869 "seek_data": false, 00:22:17.869 "copy": true, 00:22:17.869 "nvme_iov_md": false 00:22:17.869 }, 00:22:17.869 "memory_domains": [ 00:22:17.869 { 00:22:17.869 "dma_device_id": "system", 00:22:17.869 "dma_device_type": 1 00:22:17.869 }, 00:22:17.869 { 00:22:17.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.869 "dma_device_type": 2 00:22:17.869 } 00:22:17.869 ], 00:22:17.869 "driver_specific": { 00:22:17.869 "passthru": { 00:22:17.869 "name": "pt1", 00:22:17.869 "base_bdev_name": "malloc1" 00:22:17.869 } 00:22:17.869 } 00:22:17.869 }' 00:22:17.869 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.869 23:08:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.869 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:17.870 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.870 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.870 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:17.870 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:18.128 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.387 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.387 "name": "pt2", 00:22:18.387 "aliases": [ 00:22:18.387 "00000000-0000-0000-0000-000000000002" 00:22:18.387 ], 00:22:18.387 "product_name": "passthru", 00:22:18.387 "block_size": 512, 00:22:18.387 "num_blocks": 65536, 00:22:18.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.387 "assigned_rate_limits": { 00:22:18.387 "rw_ios_per_sec": 0, 00:22:18.387 "rw_mbytes_per_sec": 0, 00:22:18.387 "r_mbytes_per_sec": 0, 00:22:18.387 "w_mbytes_per_sec": 0 00:22:18.387 }, 00:22:18.387 "claimed": true, 00:22:18.387 "claim_type": "exclusive_write", 00:22:18.387 "zoned": false, 00:22:18.387 "supported_io_types": { 00:22:18.387 "read": true, 00:22:18.387 "write": true, 00:22:18.387 "unmap": true, 00:22:18.387 "flush": true, 00:22:18.387 "reset": true, 00:22:18.387 "nvme_admin": false, 00:22:18.387 "nvme_io": false, 00:22:18.387 "nvme_io_md": false, 00:22:18.387 "write_zeroes": true, 00:22:18.387 "zcopy": true, 00:22:18.387 "get_zone_info": false, 00:22:18.387 "zone_management": false, 00:22:18.387 "zone_append": false, 00:22:18.387 "compare": false, 00:22:18.387 "compare_and_write": false, 00:22:18.387 "abort": true, 00:22:18.387 "seek_hole": false, 00:22:18.387 "seek_data": false, 00:22:18.387 "copy": true, 00:22:18.387 "nvme_iov_md": false 00:22:18.387 }, 00:22:18.387 "memory_domains": [ 00:22:18.387 { 00:22:18.387 "dma_device_id": "system", 00:22:18.387 "dma_device_type": 1 00:22:18.387 }, 00:22:18.387 { 00:22:18.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.387 "dma_device_type": 2 00:22:18.387 } 00:22:18.387 ], 00:22:18.387 "driver_specific": { 00:22:18.387 "passthru": { 00:22:18.387 "name": "pt2", 00:22:18.387 "base_bdev_name": "malloc2" 00:22:18.387 } 00:22:18.387 } 00:22:18.387 }' 00:22:18.387 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.387 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.387 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.645 23:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.645 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.903 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.903 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.903 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:18.903 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.161 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.161 "name": "pt3", 00:22:19.161 "aliases": [ 00:22:19.161 "00000000-0000-0000-0000-000000000003" 00:22:19.161 ], 00:22:19.161 "product_name": "passthru", 00:22:19.161 "block_size": 512, 00:22:19.161 "num_blocks": 65536, 00:22:19.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.161 "assigned_rate_limits": { 00:22:19.161 "rw_ios_per_sec": 0, 00:22:19.161 "rw_mbytes_per_sec": 0, 00:22:19.161 "r_mbytes_per_sec": 0, 00:22:19.161 "w_mbytes_per_sec": 0 00:22:19.161 }, 00:22:19.161 "claimed": true, 00:22:19.161 "claim_type": "exclusive_write", 00:22:19.161 "zoned": false, 00:22:19.161 "supported_io_types": { 00:22:19.161 "read": true, 00:22:19.161 "write": true, 00:22:19.161 "unmap": true, 00:22:19.161 "flush": true, 00:22:19.161 "reset": true, 00:22:19.161 "nvme_admin": false, 00:22:19.161 "nvme_io": false, 00:22:19.161 "nvme_io_md": false, 00:22:19.161 "write_zeroes": true, 00:22:19.161 "zcopy": true, 00:22:19.161 "get_zone_info": false, 00:22:19.161 "zone_management": false, 00:22:19.161 "zone_append": false, 00:22:19.161 "compare": false, 00:22:19.161 "compare_and_write": false, 00:22:19.161 "abort": true, 00:22:19.161 "seek_hole": false, 00:22:19.161 "seek_data": false, 00:22:19.161 "copy": true, 00:22:19.161 "nvme_iov_md": false 00:22:19.161 }, 00:22:19.161 "memory_domains": [ 00:22:19.161 { 00:22:19.161 "dma_device_id": "system", 00:22:19.161 "dma_device_type": 1 00:22:19.161 }, 00:22:19.161 { 00:22:19.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.161 "dma_device_type": 2 00:22:19.161 } 00:22:19.161 ], 00:22:19.161 "driver_specific": { 00:22:19.161 "passthru": { 00:22:19.161 "name": "pt3", 00:22:19.161 "base_bdev_name": "malloc3" 00:22:19.161 } 00:22:19.161 } 00:22:19.161 }' 00:22:19.161 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.161 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.161 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.161 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.161 23:08:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:19.419 23:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.677 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.677 "name": "pt4", 00:22:19.677 "aliases": [ 00:22:19.677 "00000000-0000-0000-0000-000000000004" 00:22:19.677 ], 00:22:19.677 "product_name": "passthru", 00:22:19.677 "block_size": 512, 00:22:19.677 "num_blocks": 65536, 00:22:19.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.677 "assigned_rate_limits": { 00:22:19.677 "rw_ios_per_sec": 0, 00:22:19.677 "rw_mbytes_per_sec": 0, 00:22:19.677 "r_mbytes_per_sec": 0, 00:22:19.677 "w_mbytes_per_sec": 0 00:22:19.677 }, 00:22:19.677 "claimed": true, 00:22:19.677 "claim_type": "exclusive_write", 00:22:19.677 "zoned": false, 00:22:19.677 "supported_io_types": { 00:22:19.677 "read": true, 00:22:19.677 "write": true, 00:22:19.677 "unmap": true, 00:22:19.677 "flush": true, 00:22:19.677 "reset": true, 00:22:19.677 "nvme_admin": false, 00:22:19.677 "nvme_io": false, 00:22:19.677 "nvme_io_md": false, 00:22:19.677 "write_zeroes": true, 00:22:19.677 "zcopy": true, 00:22:19.677 "get_zone_info": false, 00:22:19.677 "zone_management": false, 00:22:19.677 "zone_append": false, 00:22:19.677 "compare": false, 00:22:19.677 "compare_and_write": false, 00:22:19.677 "abort": true, 00:22:19.677 "seek_hole": false, 00:22:19.677 "seek_data": false, 00:22:19.677 "copy": true, 00:22:19.677 "nvme_iov_md": false 00:22:19.677 }, 00:22:19.677 "memory_domains": [ 00:22:19.677 { 00:22:19.677 "dma_device_id": "system", 00:22:19.677 "dma_device_type": 1 00:22:19.677 }, 00:22:19.677 { 00:22:19.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.677 "dma_device_type": 2 00:22:19.677 } 00:22:19.677 ], 00:22:19.677 "driver_specific": { 00:22:19.677 "passthru": { 00:22:19.677 "name": "pt4", 00:22:19.677 "base_bdev_name": "malloc4" 00:22:19.677 } 00:22:19.677 } 00:22:19.677 }' 00:22:19.677 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.935 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.192 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.192 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.192 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.192 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.192 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:20.192 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:20.451 [2024-07-13 23:08:09.736724] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.451 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4c53bf84-643f-47d2-b5f0-044062ad986e 00:22:20.451 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4c53bf84-643f-47d2-b5f0-044062ad986e ']' 00:22:20.451 23:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:20.710 [2024-07-13 23:08:10.040519] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.710 [2024-07-13 23:08:10.040570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.710 [2024-07-13 23:08:10.040719] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.710 [2024-07-13 23:08:10.040868] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.710 [2024-07-13 23:08:10.040897] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:22:20.710 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.710 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:20.967 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:20.967 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:20.967 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:20.967 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:21.226 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.226 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:21.484 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.484 23:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:21.741 23:08:11 bdev_raid.raid_superblock_test -- 
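
[Note on the phase above, not part of the log] Teardown mirrors setup: the raid's uuid is captured and checked to be non-empty, the raid is deleted, the listing is confirmed empty, and each passthru wrapper is removed in turn. Note what this deliberately leaves behind — the superblock was written through the passthru layer onto the malloc bdevs, and the next phase of the trace shows it survives on malloc1..malloc4. A sketch of the teardown (rpc() as before):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Delete the raid, then verify nothing is listed any more.
    rpc bdev_raid_delete raid_bdev1
    [ -z "$(rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]

    # Remove the wrappers; the superblock stays behind on malloc1..malloc4.
    for pt in pt1 pt2 pt3 pt4; do
        rpc bdev_passthru_delete "$pt"
    done
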
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.741 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:21.998 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:21.998 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:22.255 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:22.513 [2024-07-13 23:08:11.729443] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:22.513 [2024-07-13 23:08:11.731999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:22.513 [2024-07-13 23:08:11.732071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:22.513 [2024-07-13 23:08:11.732117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:22.513 [2024-07-13 23:08:11.732187] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:22.513 [2024-07-13 23:08:11.732285] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:22.513 [2024-07-13 23:08:11.732371] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:22:22.513 [2024-07-13 23:08:11.732435] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:22.513 [2024-07-13 23:08:11.732466] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.513 [2024-07-13 23:08:11.732479] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:22:22.513 request: 00:22:22.513 { 00:22:22.513 "name": "raid_bdev1", 00:22:22.513 "raid_level": "raid0", 00:22:22.513 "base_bdevs": [ 00:22:22.513 "malloc1", 00:22:22.513 "malloc2", 00:22:22.513 "malloc3", 00:22:22.513 "malloc4" 00:22:22.513 ], 00:22:22.513 "strip_size_kb": 64, 00:22:22.513 "superblock": false, 00:22:22.513 "method": "bdev_raid_create", 00:22:22.513 "req_id": 1 00:22:22.513 } 00:22:22.513 Got JSON-RPC error response 00:22:22.513 response: 00:22:22.513 { 00:22:22.513 "code": -17, 00:22:22.513 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:22.513 } 00:22:22.513 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:22.513 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.513 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.513 23:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.513 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.513 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:22.771 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:22.771 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:22.771 23:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:23.028 [2024-07-13 23:08:12.237488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:23.028 [2024-07-13 23:08:12.237639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.028 [2024-07-13 23:08:12.237684] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:23.028 [2024-07-13 23:08:12.237733] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.028 [2024-07-13 23:08:12.240736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.028 [2024-07-13 23:08:12.240861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:23.028 [2024-07-13 23:08:12.240989] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:23.028 [2024-07-13 23:08:12.241093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:23.028 pt1 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.028 23:08:12 
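
[Note on the phase above, not part of the log] The NOT helper asserts that rebuilding a raid directly on malloc1..malloc4 must fail: each of them still carries raid_bdev1's superblock (written through the passthru layer), so bdev_raid_create is rejected with JSON-RPC error -17, "File exists". Recreating the pt1 wrapper immediately afterwards shows the flip side: the examine path finds the superblock and claims pt1 straight back into a configuring raid_bdev1. The failing call, as an expect-failure sketch (rpc() as before; note the call omits -s, as in the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Must fail: the malloc bdevs carry a superblock for a different raid bdev.
    if rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo 'expected bdev_raid_create to fail with -17 (File exists)' >&2
        exit 1
    fi
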
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.028 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.286 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.286 "name": "raid_bdev1", 00:22:23.286 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:23.286 "strip_size_kb": 64, 00:22:23.286 "state": "configuring", 00:22:23.286 "raid_level": "raid0", 00:22:23.286 "superblock": true, 00:22:23.286 "num_base_bdevs": 4, 00:22:23.286 "num_base_bdevs_discovered": 1, 00:22:23.286 "num_base_bdevs_operational": 4, 00:22:23.286 "base_bdevs_list": [ 00:22:23.286 { 00:22:23.286 "name": "pt1", 00:22:23.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:23.286 "is_configured": true, 00:22:23.286 "data_offset": 2048, 00:22:23.286 "data_size": 63488 00:22:23.286 }, 00:22:23.286 { 00:22:23.286 "name": null, 00:22:23.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.286 "is_configured": false, 00:22:23.286 "data_offset": 2048, 00:22:23.286 "data_size": 63488 00:22:23.286 }, 00:22:23.286 { 00:22:23.286 "name": null, 00:22:23.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:23.286 "is_configured": false, 00:22:23.286 "data_offset": 2048, 00:22:23.286 "data_size": 63488 00:22:23.286 }, 00:22:23.286 { 00:22:23.286 "name": null, 00:22:23.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:23.286 "is_configured": false, 00:22:23.286 "data_offset": 2048, 00:22:23.286 "data_size": 63488 00:22:23.286 } 00:22:23.286 ] 00:22:23.286 }' 00:22:23.286 23:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.286 23:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.853 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:22:23.853 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:24.111 [2024-07-13 23:08:13.453960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:24.111 [2024-07-13 23:08:13.454074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.111 [2024-07-13 23:08:13.454122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:24.111 [2024-07-13 23:08:13.454147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.111 [2024-07-13 23:08:13.454754] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:24.111 [2024-07-13 23:08:13.454817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:24.111 [2024-07-13 23:08:13.454961] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:24.111 [2024-07-13 23:08:13.455024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:24.111 pt2 00:22:24.111 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:24.676 [2024-07-13 23:08:13.794125] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.676 23:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.676 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.676 "name": "raid_bdev1", 00:22:24.676 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:24.676 "strip_size_kb": 64, 00:22:24.676 "state": "configuring", 00:22:24.676 "raid_level": "raid0", 00:22:24.676 "superblock": true, 00:22:24.676 "num_base_bdevs": 4, 00:22:24.676 "num_base_bdevs_discovered": 1, 00:22:24.676 "num_base_bdevs_operational": 4, 00:22:24.676 "base_bdevs_list": [ 00:22:24.676 { 00:22:24.676 "name": "pt1", 00:22:24.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:24.676 "is_configured": true, 00:22:24.676 "data_offset": 2048, 00:22:24.676 "data_size": 63488 00:22:24.676 }, 00:22:24.676 { 00:22:24.676 "name": null, 00:22:24.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.676 "is_configured": false, 00:22:24.676 "data_offset": 2048, 00:22:24.676 "data_size": 63488 00:22:24.676 }, 00:22:24.676 { 00:22:24.676 "name": null, 00:22:24.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:24.676 "is_configured": false, 00:22:24.676 "data_offset": 2048, 00:22:24.676 "data_size": 63488 00:22:24.676 }, 00:22:24.676 { 00:22:24.676 "name": null, 00:22:24.676 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:24.676 "is_configured": false, 00:22:24.676 "data_offset": 2048, 00:22:24.676 "data_size": 63488 00:22:24.676 } 00:22:24.676 ] 00:22:24.676 }' 00:22:24.676 23:08:14 
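
[Note on the phase above, not part of the log] This claim/release cycle is the heart of the superblock test: recreating the pt2 wrapper makes the examine path find the superblock and claim pt2 back into the configuring raid, and bdev_passthru_delete pt2 detaches it again (back to 1 of 4 discovered) without destroying the raid. The loop that follows repeats the recreation for pt2..pt4; once the fourth member is claimed, the raid assembles on its own and goes online with no further bdev_raid_create call. The cycle as a sketch (rpc() as before):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Recreate the wrapper: examine finds the superblock and re-claims pt2.
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Delete it again: pt2 is removed from the still-configuring raid.
    rpc bdev_passthru_delete pt2
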
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.676 23:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.609 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:25.609 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:25.609 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.609 [2024-07-13 23:08:14.934311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.609 [2024-07-13 23:08:14.934414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.609 [2024-07-13 23:08:14.934460] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:25.609 [2024-07-13 23:08:14.934483] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.609 [2024-07-13 23:08:14.935026] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.609 [2024-07-13 23:08:14.935083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.609 [2024-07-13 23:08:14.935167] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:25.609 [2024-07-13 23:08:14.935193] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.609 pt2 00:22:25.609 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:25.609 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:25.609 23:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:25.920 [2024-07-13 23:08:15.186360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:25.920 [2024-07-13 23:08:15.186463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.920 [2024-07-13 23:08:15.186493] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:25.920 [2024-07-13 23:08:15.186519] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.920 [2024-07-13 23:08:15.187033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.920 [2024-07-13 23:08:15.187087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:25.920 [2024-07-13 23:08:15.187174] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:25.920 [2024-07-13 23:08:15.187206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:25.920 pt3 00:22:25.920 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:25.920 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:25.920 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:26.193 [2024-07-13 23:08:15.410440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:22:26.193 [2024-07-13 23:08:15.410558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.193 [2024-07-13 23:08:15.410594] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:26.193 [2024-07-13 23:08:15.410622] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.193 [2024-07-13 23:08:15.411119] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.193 [2024-07-13 23:08:15.411176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:26.193 [2024-07-13 23:08:15.411256] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:26.193 [2024-07-13 23:08:15.411282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:26.193 [2024-07-13 23:08:15.411423] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:26.193 [2024-07-13 23:08:15.411437] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:26.193 [2024-07-13 23:08:15.411522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:26.193 [2024-07-13 23:08:15.411840] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:26.193 [2024-07-13 23:08:15.411853] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:26.193 [2024-07-13 23:08:15.411951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.193 pt4 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.193 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.451 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.451 "name": "raid_bdev1", 00:22:26.451 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:26.451 "strip_size_kb": 64, 00:22:26.451 "state": "online", 00:22:26.451 
"raid_level": "raid0", 00:22:26.451 "superblock": true, 00:22:26.451 "num_base_bdevs": 4, 00:22:26.451 "num_base_bdevs_discovered": 4, 00:22:26.451 "num_base_bdevs_operational": 4, 00:22:26.451 "base_bdevs_list": [ 00:22:26.451 { 00:22:26.451 "name": "pt1", 00:22:26.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.451 "is_configured": true, 00:22:26.451 "data_offset": 2048, 00:22:26.451 "data_size": 63488 00:22:26.451 }, 00:22:26.451 { 00:22:26.451 "name": "pt2", 00:22:26.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.451 "is_configured": true, 00:22:26.451 "data_offset": 2048, 00:22:26.451 "data_size": 63488 00:22:26.451 }, 00:22:26.451 { 00:22:26.451 "name": "pt3", 00:22:26.451 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.451 "is_configured": true, 00:22:26.451 "data_offset": 2048, 00:22:26.451 "data_size": 63488 00:22:26.451 }, 00:22:26.451 { 00:22:26.451 "name": "pt4", 00:22:26.451 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:26.451 "is_configured": true, 00:22:26.451 "data_offset": 2048, 00:22:26.451 "data_size": 63488 00:22:26.451 } 00:22:26.451 ] 00:22:26.451 }' 00:22:26.451 23:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.451 23:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:27.016 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:27.280 [2024-07-13 23:08:16.583130] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.280 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:27.280 "name": "raid_bdev1", 00:22:27.280 "aliases": [ 00:22:27.280 "4c53bf84-643f-47d2-b5f0-044062ad986e" 00:22:27.280 ], 00:22:27.280 "product_name": "Raid Volume", 00:22:27.280 "block_size": 512, 00:22:27.280 "num_blocks": 253952, 00:22:27.280 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:27.280 "assigned_rate_limits": { 00:22:27.280 "rw_ios_per_sec": 0, 00:22:27.280 "rw_mbytes_per_sec": 0, 00:22:27.280 "r_mbytes_per_sec": 0, 00:22:27.280 "w_mbytes_per_sec": 0 00:22:27.280 }, 00:22:27.280 "claimed": false, 00:22:27.280 "zoned": false, 00:22:27.280 "supported_io_types": { 00:22:27.280 "read": true, 00:22:27.280 "write": true, 00:22:27.280 "unmap": true, 00:22:27.280 "flush": true, 00:22:27.280 "reset": true, 00:22:27.280 "nvme_admin": false, 00:22:27.280 "nvme_io": false, 00:22:27.280 "nvme_io_md": false, 00:22:27.280 "write_zeroes": true, 00:22:27.280 "zcopy": false, 00:22:27.280 "get_zone_info": false, 00:22:27.280 "zone_management": false, 00:22:27.280 "zone_append": false, 00:22:27.280 "compare": false, 00:22:27.280 "compare_and_write": false, 
00:22:27.280 "abort": false, 00:22:27.280 "seek_hole": false, 00:22:27.280 "seek_data": false, 00:22:27.280 "copy": false, 00:22:27.280 "nvme_iov_md": false 00:22:27.280 }, 00:22:27.280 "memory_domains": [ 00:22:27.280 { 00:22:27.280 "dma_device_id": "system", 00:22:27.280 "dma_device_type": 1 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.280 "dma_device_type": 2 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "system", 00:22:27.280 "dma_device_type": 1 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.280 "dma_device_type": 2 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "system", 00:22:27.280 "dma_device_type": 1 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.280 "dma_device_type": 2 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "system", 00:22:27.280 "dma_device_type": 1 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.280 "dma_device_type": 2 00:22:27.280 } 00:22:27.280 ], 00:22:27.280 "driver_specific": { 00:22:27.280 "raid": { 00:22:27.280 "uuid": "4c53bf84-643f-47d2-b5f0-044062ad986e", 00:22:27.280 "strip_size_kb": 64, 00:22:27.280 "state": "online", 00:22:27.280 "raid_level": "raid0", 00:22:27.280 "superblock": true, 00:22:27.280 "num_base_bdevs": 4, 00:22:27.280 "num_base_bdevs_discovered": 4, 00:22:27.280 "num_base_bdevs_operational": 4, 00:22:27.280 "base_bdevs_list": [ 00:22:27.280 { 00:22:27.280 "name": "pt1", 00:22:27.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.280 "is_configured": true, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "name": "pt2", 00:22:27.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.280 "is_configured": true, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "name": "pt3", 00:22:27.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.280 "is_configured": true, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "name": "pt4", 00:22:27.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:27.280 "is_configured": true, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 } 00:22:27.280 ] 00:22:27.280 } 00:22:27.280 } 00:22:27.280 }' 00:22:27.280 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:27.280 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:27.280 pt2 00:22:27.280 pt3 00:22:27.280 pt4' 00:22:27.280 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:27.280 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:27.280 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:27.847 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:27.847 "name": "pt1", 00:22:27.847 "aliases": [ 00:22:27.847 "00000000-0000-0000-0000-000000000001" 00:22:27.847 ], 00:22:27.847 "product_name": "passthru", 00:22:27.847 "block_size": 512, 00:22:27.847 "num_blocks": 65536, 00:22:27.847 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:27.847 "assigned_rate_limits": { 00:22:27.847 "rw_ios_per_sec": 0, 00:22:27.847 "rw_mbytes_per_sec": 0, 00:22:27.847 "r_mbytes_per_sec": 0, 00:22:27.847 "w_mbytes_per_sec": 0 00:22:27.847 }, 00:22:27.847 "claimed": true, 00:22:27.847 "claim_type": "exclusive_write", 00:22:27.847 "zoned": false, 00:22:27.847 "supported_io_types": { 00:22:27.847 "read": true, 00:22:27.847 "write": true, 00:22:27.847 "unmap": true, 00:22:27.847 "flush": true, 00:22:27.847 "reset": true, 00:22:27.847 "nvme_admin": false, 00:22:27.847 "nvme_io": false, 00:22:27.847 "nvme_io_md": false, 00:22:27.847 "write_zeroes": true, 00:22:27.847 "zcopy": true, 00:22:27.847 "get_zone_info": false, 00:22:27.847 "zone_management": false, 00:22:27.847 "zone_append": false, 00:22:27.847 "compare": false, 00:22:27.847 "compare_and_write": false, 00:22:27.847 "abort": true, 00:22:27.847 "seek_hole": false, 00:22:27.847 "seek_data": false, 00:22:27.847 "copy": true, 00:22:27.847 "nvme_iov_md": false 00:22:27.847 }, 00:22:27.847 "memory_domains": [ 00:22:27.847 { 00:22:27.847 "dma_device_id": "system", 00:22:27.847 "dma_device_type": 1 00:22:27.847 }, 00:22:27.847 { 00:22:27.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.847 "dma_device_type": 2 00:22:27.847 } 00:22:27.847 ], 00:22:27.847 "driver_specific": { 00:22:27.847 "passthru": { 00:22:27.847 "name": "pt1", 00:22:27.847 "base_bdev_name": "malloc1" 00:22:27.847 } 00:22:27.847 } 00:22:27.847 }' 00:22:27.847 23:08:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:27.847 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.106 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.106 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.106 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.106 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:28.106 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.364 "name": "pt2", 00:22:28.364 "aliases": [ 00:22:28.364 "00000000-0000-0000-0000-000000000002" 00:22:28.364 ], 00:22:28.364 "product_name": "passthru", 00:22:28.364 "block_size": 512, 00:22:28.364 "num_blocks": 65536, 00:22:28.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.364 "assigned_rate_limits": { 00:22:28.364 "rw_ios_per_sec": 0, 00:22:28.364 "rw_mbytes_per_sec": 0, 
00:22:28.364 "r_mbytes_per_sec": 0, 00:22:28.364 "w_mbytes_per_sec": 0 00:22:28.364 }, 00:22:28.364 "claimed": true, 00:22:28.364 "claim_type": "exclusive_write", 00:22:28.364 "zoned": false, 00:22:28.364 "supported_io_types": { 00:22:28.364 "read": true, 00:22:28.364 "write": true, 00:22:28.364 "unmap": true, 00:22:28.364 "flush": true, 00:22:28.364 "reset": true, 00:22:28.364 "nvme_admin": false, 00:22:28.364 "nvme_io": false, 00:22:28.364 "nvme_io_md": false, 00:22:28.364 "write_zeroes": true, 00:22:28.364 "zcopy": true, 00:22:28.364 "get_zone_info": false, 00:22:28.364 "zone_management": false, 00:22:28.364 "zone_append": false, 00:22:28.364 "compare": false, 00:22:28.364 "compare_and_write": false, 00:22:28.364 "abort": true, 00:22:28.364 "seek_hole": false, 00:22:28.364 "seek_data": false, 00:22:28.364 "copy": true, 00:22:28.364 "nvme_iov_md": false 00:22:28.364 }, 00:22:28.364 "memory_domains": [ 00:22:28.364 { 00:22:28.364 "dma_device_id": "system", 00:22:28.364 "dma_device_type": 1 00:22:28.364 }, 00:22:28.364 { 00:22:28.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.364 "dma_device_type": 2 00:22:28.364 } 00:22:28.364 ], 00:22:28.364 "driver_specific": { 00:22:28.364 "passthru": { 00:22:28.364 "name": "pt2", 00:22:28.364 "base_bdev_name": "malloc2" 00:22:28.364 } 00:22:28.364 } 00:22:28.364 }' 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:28.364 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.623 23:08:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:28.881 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.881 "name": "pt3", 00:22:28.881 "aliases": [ 00:22:28.881 "00000000-0000-0000-0000-000000000003" 00:22:28.881 ], 00:22:28.881 "product_name": "passthru", 00:22:28.881 "block_size": 512, 00:22:28.881 "num_blocks": 65536, 00:22:28.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:28.881 "assigned_rate_limits": { 00:22:28.881 "rw_ios_per_sec": 0, 00:22:28.881 "rw_mbytes_per_sec": 0, 00:22:28.881 "r_mbytes_per_sec": 0, 00:22:28.881 "w_mbytes_per_sec": 0 00:22:28.881 }, 00:22:28.881 "claimed": true, 00:22:28.881 "claim_type": 
"exclusive_write", 00:22:28.881 "zoned": false, 00:22:28.881 "supported_io_types": { 00:22:28.881 "read": true, 00:22:28.881 "write": true, 00:22:28.881 "unmap": true, 00:22:28.881 "flush": true, 00:22:28.881 "reset": true, 00:22:28.881 "nvme_admin": false, 00:22:28.881 "nvme_io": false, 00:22:28.881 "nvme_io_md": false, 00:22:28.881 "write_zeroes": true, 00:22:28.881 "zcopy": true, 00:22:28.881 "get_zone_info": false, 00:22:28.881 "zone_management": false, 00:22:28.881 "zone_append": false, 00:22:28.881 "compare": false, 00:22:28.881 "compare_and_write": false, 00:22:28.881 "abort": true, 00:22:28.881 "seek_hole": false, 00:22:28.881 "seek_data": false, 00:22:28.881 "copy": true, 00:22:28.881 "nvme_iov_md": false 00:22:28.881 }, 00:22:28.881 "memory_domains": [ 00:22:28.881 { 00:22:28.881 "dma_device_id": "system", 00:22:28.881 "dma_device_type": 1 00:22:28.881 }, 00:22:28.881 { 00:22:28.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.882 "dma_device_type": 2 00:22:28.882 } 00:22:28.882 ], 00:22:28.882 "driver_specific": { 00:22:28.882 "passthru": { 00:22:28.882 "name": "pt3", 00:22:28.882 "base_bdev_name": "malloc3" 00:22:28.882 } 00:22:28.882 } 00:22:28.882 }' 00:22:28.882 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:28.882 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:29.139 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.140 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.397 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:29.397 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.397 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:29.397 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:29.656 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:29.656 "name": "pt4", 00:22:29.656 "aliases": [ 00:22:29.656 "00000000-0000-0000-0000-000000000004" 00:22:29.656 ], 00:22:29.656 "product_name": "passthru", 00:22:29.656 "block_size": 512, 00:22:29.656 "num_blocks": 65536, 00:22:29.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:29.656 "assigned_rate_limits": { 00:22:29.656 "rw_ios_per_sec": 0, 00:22:29.656 "rw_mbytes_per_sec": 0, 00:22:29.656 "r_mbytes_per_sec": 0, 00:22:29.656 "w_mbytes_per_sec": 0 00:22:29.656 }, 00:22:29.656 "claimed": true, 00:22:29.656 "claim_type": "exclusive_write", 00:22:29.656 "zoned": false, 00:22:29.656 "supported_io_types": { 00:22:29.656 "read": true, 00:22:29.656 "write": true, 00:22:29.656 
"unmap": true, 00:22:29.656 "flush": true, 00:22:29.656 "reset": true, 00:22:29.656 "nvme_admin": false, 00:22:29.656 "nvme_io": false, 00:22:29.656 "nvme_io_md": false, 00:22:29.656 "write_zeroes": true, 00:22:29.656 "zcopy": true, 00:22:29.656 "get_zone_info": false, 00:22:29.656 "zone_management": false, 00:22:29.656 "zone_append": false, 00:22:29.656 "compare": false, 00:22:29.656 "compare_and_write": false, 00:22:29.656 "abort": true, 00:22:29.656 "seek_hole": false, 00:22:29.656 "seek_data": false, 00:22:29.656 "copy": true, 00:22:29.656 "nvme_iov_md": false 00:22:29.656 }, 00:22:29.656 "memory_domains": [ 00:22:29.656 { 00:22:29.656 "dma_device_id": "system", 00:22:29.656 "dma_device_type": 1 00:22:29.656 }, 00:22:29.656 { 00:22:29.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.656 "dma_device_type": 2 00:22:29.656 } 00:22:29.656 ], 00:22:29.656 "driver_specific": { 00:22:29.656 "passthru": { 00:22:29.656 "name": "pt4", 00:22:29.656 "base_bdev_name": "malloc4" 00:22:29.656 } 00:22:29.656 } 00:22:29.656 }' 00:22:29.656 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.656 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.656 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:29.656 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.656 23:08:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.656 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:29.656 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.656 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.914 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:29.914 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.914 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.914 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:29.914 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:29.914 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:30.173 [2024-07-13 23:08:19.367733] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4c53bf84-643f-47d2-b5f0-044062ad986e '!=' 4c53bf84-643f-47d2-b5f0-044062ad986e ']' 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 146252 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 146252 ']' 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 146252 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:22:30.173 23:08:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146252 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146252' 00:22:30.173 killing process with pid 146252 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 146252 00:22:30.173 [2024-07-13 23:08:19.403908] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:30.173 [2024-07-13 23:08:19.403990] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.173 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 146252 00:22:30.173 [2024-07-13 23:08:19.404057] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.173 [2024-07-13 23:08:19.404068] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:30.173 [2024-07-13 23:08:19.442684] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:30.432 23:08:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:30.432 00:22:30.432 real 0m17.352s 00:22:30.432 user 0m32.132s 00:22:30.432 sys 0m2.360s 00:22:30.432 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.432 23:08:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.432 ************************************ 00:22:30.432 END TEST raid_superblock_test 00:22:30.432 ************************************ 00:22:30.432 23:08:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:30.432 23:08:19 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:22:30.432 23:08:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:30.432 23:08:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.432 23:08:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:30.432 ************************************ 00:22:30.432 START TEST raid_read_error_test 00:22:30.432 ************************************ 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:30.432 
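raid_read_error_test starts here as raid_io_error_test raid0 4 read; the @791 arithmetic loop traced above and below simply builds the base bdev name list from the requested count. A condensed sketch of that parameter handling, following the positional mapping visible in the trace:

    raid_io_error_test() {    # condensed parameter handling from the trace
        local raid_level=$1 num_base_bdevs=$2 error_io_type=$3    # raid0 4 read
        local base_bdevs=()
        local i
        for ((i = 1; i <= num_base_bdevs; i++)); do
            base_bdevs+=("BaseBdev$i")
        done
        # base_bdevs is now: BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
    }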
23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.SiHAIcVkjz 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=146801 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 146801 /var/tmp/spdk-raid.sock 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 146801 ']' 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:30.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
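The bdevperf invocation above starts the target in wait-for-RPC mode (-z) with its output captured to a temp log so the failure rate can be parsed after the run. A sketch of the launch; the argument list is verbatim from the trace, but the backgrounding and output redirection are assumptions the xtrace output does not show:

    # Launch bdevperf idle (-z) and capture its log for later parsing.
    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock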
00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.432 23:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.432 [2024-07-13 23:08:19.787387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:30.432 [2024-07-13 23:08:19.787595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146801 ] 00:22:30.690 [2024-07-13 23:08:19.923971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.690 [2024-07-13 23:08:20.017464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.690 [2024-07-13 23:08:20.090827] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:31.625 23:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.625 23:08:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:31.625 23:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:31.625 23:08:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:31.883 BaseBdev1_malloc 00:22:31.883 23:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:31.883 true 00:22:32.142 23:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:32.142 [2024-07-13 23:08:21.491760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:32.142 [2024-07-13 23:08:21.492036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.142 [2024-07-13 23:08:21.492195] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:22:32.142 [2024-07-13 23:08:21.492347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.142 [2024-07-13 23:08:21.495136] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.142 [2024-07-13 23:08:21.495349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:32.142 BaseBdev1 00:22:32.142 23:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:32.142 23:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:32.708 BaseBdev2_malloc 00:22:32.708 23:08:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:32.708 true 00:22:32.708 23:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:32.967 [2024-07-13 23:08:22.289899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:22:32.967 [2024-07-13 23:08:22.290147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.967 [2024-07-13 23:08:22.290238] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:32.967 [2024-07-13 23:08:22.290527] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.967 [2024-07-13 23:08:22.293157] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.967 [2024-07-13 23:08:22.293344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:32.967 BaseBdev2 00:22:32.967 23:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:32.967 23:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:33.226 BaseBdev3_malloc 00:22:33.226 23:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:33.486 true 00:22:33.486 23:08:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:33.745 [2024-07-13 23:08:23.052024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:33.745 [2024-07-13 23:08:23.052259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.745 [2024-07-13 23:08:23.052342] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:33.745 [2024-07-13 23:08:23.052581] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.745 [2024-07-13 23:08:23.055156] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.745 [2024-07-13 23:08:23.055350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:33.745 BaseBdev3 00:22:33.745 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:33.745 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:34.002 BaseBdev4_malloc 00:22:34.002 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:34.260 true 00:22:34.260 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:34.532 [2024-07-13 23:08:23.701705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:34.532 [2024-07-13 23:08:23.701984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.532 [2024-07-13 23:08:23.702061] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:34.532 [2024-07-13 23:08:23.702331] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.532 [2024-07-13 23:08:23.705209] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:34.532 [2024-07-13 23:08:23.705407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:34.532 BaseBdev4 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:34.532 [2024-07-13 23:08:23.917934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.532 [2024-07-13 23:08:23.920567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.532 [2024-07-13 23:08:23.920861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:34.532 [2024-07-13 23:08:23.921083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:34.532 [2024-07-13 23:08:23.921561] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:34.532 [2024-07-13 23:08:23.921697] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:34.532 [2024-07-13 23:08:23.921975] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:34.532 [2024-07-13 23:08:23.922539] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:34.532 [2024-07-13 23:08:23.922692] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:34.532 [2024-07-13 23:08:23.923002] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:34.532 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.533 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.791 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.791 23:08:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.791 23:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.791 "name": "raid_bdev1", 00:22:34.791 "uuid": "e111f2c1-2787-4995-9c86-0afdfaa6c906", 00:22:34.791 "strip_size_kb": 64, 00:22:34.791 "state": "online", 00:22:34.791 "raid_level": "raid0", 00:22:34.791 "superblock": true, 00:22:34.791 "num_base_bdevs": 4, 00:22:34.791 "num_base_bdevs_discovered": 4, 00:22:34.791 
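Each of the four base bdevs built above is a three-layer stack: a malloc bdev, an error bdev wrapped around it (exposed as EE_<name>), and a passthru bdev on top that the raid volume actually consumes; the error layer is what later lets the test inject failures underneath the raid. A condensed sketch of one pass of that setup plus the final raid assembly, with the commands taken from the trace (32 MB at 512-byte blocks matches the 65536 num_blocks reported earlier):

    # Build malloc -> error -> passthru stacks, then assemble the raid0 volume.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for bdev in "${base_bdevs[@]}"; do
        $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"    # 32 MB, 512 B blocks
        $rpc bdev_error_create "${bdev}_malloc"               # exposes EE_${bdev}_malloc
        $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
    done

    # -z 64: 64 KiB strip size; -s: write a superblock on the base bdevs
    $rpc bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s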
"num_base_bdevs_operational": 4, 00:22:34.791 "base_bdevs_list": [ 00:22:34.791 { 00:22:34.791 "name": "BaseBdev1", 00:22:34.791 "uuid": "a9b4c9e2-ae52-557b-b23e-ea5eb965ee8b", 00:22:34.791 "is_configured": true, 00:22:34.791 "data_offset": 2048, 00:22:34.791 "data_size": 63488 00:22:34.791 }, 00:22:34.791 { 00:22:34.791 "name": "BaseBdev2", 00:22:34.791 "uuid": "5f7f2d8b-3d5d-5f85-94c2-3855e251907d", 00:22:34.791 "is_configured": true, 00:22:34.791 "data_offset": 2048, 00:22:34.791 "data_size": 63488 00:22:34.791 }, 00:22:34.791 { 00:22:34.791 "name": "BaseBdev3", 00:22:34.791 "uuid": "016d27c8-eff5-5a84-8044-a07052c91396", 00:22:34.791 "is_configured": true, 00:22:34.791 "data_offset": 2048, 00:22:34.791 "data_size": 63488 00:22:34.791 }, 00:22:34.791 { 00:22:34.791 "name": "BaseBdev4", 00:22:34.791 "uuid": "f3905d04-ce6b-5c8c-8b8b-a6b7af711137", 00:22:34.791 "is_configured": true, 00:22:34.791 "data_offset": 2048, 00:22:34.791 "data_size": 63488 00:22:34.791 } 00:22:34.791 ] 00:22:34.791 }' 00:22:34.791 23:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.791 23:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.727 23:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:35.727 23:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:35.727 [2024-07-13 23:08:24.875867] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:36.663 23:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.663 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:36.923 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.923 "name": "raid_bdev1", 00:22:36.923 "uuid": "e111f2c1-2787-4995-9c86-0afdfaa6c906", 00:22:36.923 "strip_size_kb": 64, 00:22:36.923 "state": "online", 00:22:36.923 "raid_level": "raid0", 00:22:36.923 "superblock": true, 00:22:36.923 "num_base_bdevs": 4, 00:22:36.923 "num_base_bdevs_discovered": 4, 00:22:36.923 "num_base_bdevs_operational": 4, 00:22:36.923 "base_bdevs_list": [ 00:22:36.923 { 00:22:36.923 "name": "BaseBdev1", 00:22:36.923 "uuid": "a9b4c9e2-ae52-557b-b23e-ea5eb965ee8b", 00:22:36.923 "is_configured": true, 00:22:36.923 "data_offset": 2048, 00:22:36.923 "data_size": 63488 00:22:36.923 }, 00:22:36.923 { 00:22:36.923 "name": "BaseBdev2", 00:22:36.923 "uuid": "5f7f2d8b-3d5d-5f85-94c2-3855e251907d", 00:22:36.923 "is_configured": true, 00:22:36.923 "data_offset": 2048, 00:22:36.923 "data_size": 63488 00:22:36.923 }, 00:22:36.923 { 00:22:36.923 "name": "BaseBdev3", 00:22:36.923 "uuid": "016d27c8-eff5-5a84-8044-a07052c91396", 00:22:36.923 "is_configured": true, 00:22:36.923 "data_offset": 2048, 00:22:36.923 "data_size": 63488 00:22:36.923 }, 00:22:36.923 { 00:22:36.923 "name": "BaseBdev4", 00:22:36.923 "uuid": "f3905d04-ce6b-5c8c-8b8b-a6b7af711137", 00:22:36.923 "is_configured": true, 00:22:36.923 "data_offset": 2048, 00:22:36.923 "data_size": 63488 00:22:36.923 } 00:22:36.923 ] 00:22:36.923 }' 00:22:36.923 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.923 23:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.858 23:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.858 [2024-07-13 23:08:27.215705] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.858 [2024-07-13 23:08:27.215790] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.858 [2024-07-13 23:08:27.218586] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.858 [2024-07-13 23:08:27.218705] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.858 [2024-07-13 23:08:27.218758] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.858 [2024-07-13 23:08:27.218770] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:37.858 0 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 146801 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 146801 ']' 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 146801 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146801 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:37.858 23:08:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146801' 00:22:37.858 killing process with pid 146801 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 146801 00:22:37.858 [2024-07-13 23:08:27.262574] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.858 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 146801 00:22:38.117 [2024-07-13 23:08:27.307759] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.SiHAIcVkjz 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:38.376 00:22:38.376 real 0m7.942s 00:22:38.376 user 0m12.928s 00:22:38.376 sys 0m0.999s 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.376 23:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.376 ************************************ 00:22:38.376 END TEST raid_read_error_test 00:22:38.376 ************************************ 00:22:38.376 23:08:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:38.376 23:08:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:22:38.376 23:08:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:38.376 23:08:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.376 23:08:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:38.376 ************************************ 00:22:38.376 START TEST raid_write_error_test 00:22:38.376 ************************************ 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- 
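After tearing the volume down, the pass above grades itself by pulling the failures-per-second column out of the bdevperf log: has_redundancy returns 1 for raid0, so the injected errors are expected to show up as a non-zero failure rate (0.43/s here). A sketch of that check, assuming bdevperf_log from the launch sketch earlier; only the raid0 "no redundancy" branch is proven by this log, so the level list in the first case arm is an assumption:

    # has_redundancy: the log only proves the raid0 path returns 1;
    # the levels in the first branch are assumed.
    has_redundancy() {
        case $1 in
            raid1 | raid5f) return 0 ;;
            *) return 1 ;;
        esac
    }

    # Column 6 of the raid_bdev1 result row is failures per second.
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    if ! has_redundancy raid0; then
        [[ $fail_per_s != "0.00" ]]    # errors must reach the raid volume
    fi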
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NQiJBw9Anc 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=147012 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 147012 /var/tmp/spdk-raid.sock 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 147012 ']' 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.376 23:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.637 [2024-07-13 23:08:27.793997] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
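The write-error pass boots a fresh bdevperf (pid 147012, log tmp.NQiJBw9Anc) and again blocks in waitforlisten until the RPC socket answers. The real helper lives in autotest_common.sh; a simplified stand-in capturing its observable behavior, with rpc_get_methods assumed as the liveness probe:

    # Simplified stand-in for waitforlisten: poll the RPC socket until the
    # freshly started target answers, bailing out if it died meanwhile.
    waitforlisten() {
        local pid=$1 addr=${2:-/var/tmp/spdk.sock}
        local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
        while ! "$rpc" -s "$addr" rpc_get_methods &> /dev/null; do
            kill -0 "$pid"    # target process must still be running
            sleep 0.1
        done
    }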
00:22:38.637 [2024-07-13 23:08:27.794197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147012 ] 00:22:38.637 [2024-07-13 23:08:27.929532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.637 [2024-07-13 23:08:28.030462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.959 [2024-07-13 23:08:28.111630] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.526 23:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.526 23:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:39.526 23:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:39.526 23:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:39.785 BaseBdev1_malloc 00:22:39.785 23:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:40.042 true 00:22:40.042 23:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:40.301 [2024-07-13 23:08:29.551565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:40.301 [2024-07-13 23:08:29.551674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.301 [2024-07-13 23:08:29.551727] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:22:40.301 [2024-07-13 23:08:29.551780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.301 [2024-07-13 23:08:29.554449] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.301 [2024-07-13 23:08:29.554525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:40.301 BaseBdev1 00:22:40.301 23:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:40.301 23:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:40.559 BaseBdev2_malloc 00:22:40.559 23:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:40.818 true 00:22:40.818 23:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:41.077 [2024-07-13 23:08:30.301918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:41.077 [2024-07-13 23:08:30.302039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.077 [2024-07-13 23:08:30.302085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:41.077 [2024-07-13 
23:08:30.302132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.077 [2024-07-13 23:08:30.304566] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.077 [2024-07-13 23:08:30.304631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:41.077 BaseBdev2 00:22:41.077 23:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:41.077 23:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:41.336 BaseBdev3_malloc 00:22:41.336 23:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:41.594 true 00:22:41.594 23:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:41.850 [2024-07-13 23:08:31.043973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:41.850 [2024-07-13 23:08:31.044244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.850 [2024-07-13 23:08:31.044403] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:41.850 [2024-07-13 23:08:31.044576] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.850 [2024-07-13 23:08:31.047394] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.850 [2024-07-13 23:08:31.047590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:41.850 BaseBdev3 00:22:41.850 23:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:41.850 23:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:42.107 BaseBdev4_malloc 00:22:42.107 23:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:42.107 true 00:22:42.365 23:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:42.622 [2024-07-13 23:08:31.779961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:42.622 [2024-07-13 23:08:31.780215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.622 [2024-07-13 23:08:31.780295] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:42.622 [2024-07-13 23:08:31.780575] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.622 [2024-07-13 23:08:31.783505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.622 [2024-07-13 23:08:31.783691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:42.622 BaseBdev4 00:22:42.622 23:08:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:42.622 [2024-07-13 23:08:31.988109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:42.622 [2024-07-13 23:08:31.990718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.622 [2024-07-13 23:08:31.990980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:42.622 [2024-07-13 23:08:31.991099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:42.622 [2024-07-13 23:08:31.991421] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:42.622 [2024-07-13 23:08:31.991473] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:42.622 [2024-07-13 23:08:31.991840] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:42.622 [2024-07-13 23:08:31.992448] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:42.622 [2024-07-13 23:08:31.992572] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:42.622 [2024-07-13 23:08:31.992902] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.622 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.880 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.880 "name": "raid_bdev1", 00:22:42.880 "uuid": "48e4f81d-3dd2-4327-9bdb-098eb1926aff", 00:22:42.880 "strip_size_kb": 64, 00:22:42.880 "state": "online", 00:22:42.880 "raid_level": "raid0", 00:22:42.880 "superblock": true, 00:22:42.880 "num_base_bdevs": 4, 00:22:42.880 "num_base_bdevs_discovered": 4, 00:22:42.880 "num_base_bdevs_operational": 4, 00:22:42.880 "base_bdevs_list": [ 00:22:42.880 { 00:22:42.880 "name": "BaseBdev1", 00:22:42.880 "uuid": "5a48ac6a-c074-5d01-8eaa-d1eff818791d", 00:22:42.880 "is_configured": true, 00:22:42.880 "data_offset": 2048, 00:22:42.880 "data_size": 63488 00:22:42.880 }, 00:22:42.880 { 
00:22:42.880 "name": "BaseBdev2", 00:22:42.880 "uuid": "1260f0b4-4d08-561a-8395-b57d5a141af2", 00:22:42.880 "is_configured": true, 00:22:42.880 "data_offset": 2048, 00:22:42.880 "data_size": 63488 00:22:42.880 }, 00:22:42.880 { 00:22:42.880 "name": "BaseBdev3", 00:22:42.880 "uuid": "a739df4d-ed7f-5632-8b3d-7347ce372ea6", 00:22:42.880 "is_configured": true, 00:22:42.880 "data_offset": 2048, 00:22:42.880 "data_size": 63488 00:22:42.880 }, 00:22:42.880 { 00:22:42.880 "name": "BaseBdev4", 00:22:42.880 "uuid": "e40fb569-1382-5590-8bda-5d28d3533a92", 00:22:42.880 "is_configured": true, 00:22:42.880 "data_offset": 2048, 00:22:42.880 "data_size": 63488 00:22:42.880 } 00:22:42.880 ] 00:22:42.880 }' 00:22:42.880 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.880 23:08:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.446 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:43.446 23:08:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:43.705 [2024-07-13 23:08:32.933620] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:44.640 23:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.899 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.158 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.158 "name": "raid_bdev1", 00:22:45.158 "uuid": "48e4f81d-3dd2-4327-9bdb-098eb1926aff", 00:22:45.158 "strip_size_kb": 64, 00:22:45.158 "state": "online", 00:22:45.158 
"raid_level": "raid0", 00:22:45.158 "superblock": true, 00:22:45.158 "num_base_bdevs": 4, 00:22:45.158 "num_base_bdevs_discovered": 4, 00:22:45.158 "num_base_bdevs_operational": 4, 00:22:45.158 "base_bdevs_list": [ 00:22:45.158 { 00:22:45.158 "name": "BaseBdev1", 00:22:45.158 "uuid": "5a48ac6a-c074-5d01-8eaa-d1eff818791d", 00:22:45.158 "is_configured": true, 00:22:45.158 "data_offset": 2048, 00:22:45.158 "data_size": 63488 00:22:45.158 }, 00:22:45.158 { 00:22:45.158 "name": "BaseBdev2", 00:22:45.158 "uuid": "1260f0b4-4d08-561a-8395-b57d5a141af2", 00:22:45.158 "is_configured": true, 00:22:45.158 "data_offset": 2048, 00:22:45.158 "data_size": 63488 00:22:45.158 }, 00:22:45.158 { 00:22:45.158 "name": "BaseBdev3", 00:22:45.158 "uuid": "a739df4d-ed7f-5632-8b3d-7347ce372ea6", 00:22:45.158 "is_configured": true, 00:22:45.159 "data_offset": 2048, 00:22:45.159 "data_size": 63488 00:22:45.159 }, 00:22:45.159 { 00:22:45.159 "name": "BaseBdev4", 00:22:45.159 "uuid": "e40fb569-1382-5590-8bda-5d28d3533a92", 00:22:45.159 "is_configured": true, 00:22:45.159 "data_offset": 2048, 00:22:45.159 "data_size": 63488 00:22:45.159 } 00:22:45.159 ] 00:22:45.159 }' 00:22:45.159 23:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.159 23:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.725 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:45.984 [2024-07-13 23:08:35.276676] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.984 [2024-07-13 23:08:35.277112] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.984 [2024-07-13 23:08:35.280220] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.984 [2024-07-13 23:08:35.280446] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.984 [2024-07-13 23:08:35.280621] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.984 [2024-07-13 23:08:35.280798] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:45.984 0 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 147012 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 147012 ']' 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 147012 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147012 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147012' 00:22:45.984 killing process with pid 147012 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 147012 00:22:45.984 [2024-07-13 23:08:35.326305] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.984 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 147012 00:22:45.984 [2024-07-13 23:08:35.375090] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NQiJBw9Anc 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:46.551 00:22:46.551 real 0m7.999s 00:22:46.551 user 0m13.054s 00:22:46.551 sys 0m1.036s 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:46.551 23:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.551 ************************************ 00:22:46.551 END TEST raid_write_error_test 00:22:46.551 ************************************ 00:22:46.551 23:08:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:46.551 23:08:35 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:46.551 23:08:35 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:22:46.551 23:08:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:46.551 23:08:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.551 23:08:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.551 ************************************ 00:22:46.551 START TEST raid_state_function_test 00:22:46.551 ************************************ 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.551 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.552 
23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=147205 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:46.552 Process raid pid: 147205 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 147205' 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 147205 /var/tmp/spdk-raid.sock 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 147205 ']' 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:46.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.552 23:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 [2024-07-13 23:08:35.856887] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
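The raid_state_function_test run starting up here drives the concat state machine over the same RPC socket: the raid is created before any of its base bdevs exist, so it parks in state "configuring" with num_base_bdevs_discovered 0, and each subsequent bdev_malloc_create raises the discovered count until the fourth member flips the array online. A condensed sketch of that sequence, assuming a bdev_svc target already listening on /var/tmp/spdk-raid.sock and SPDK's rpc.py on PATH (the harness's waitforlisten/waitforbdev plumbing is elided, and the delete/recreate cycles visible in the trace are collapsed):

    # raid over base bdevs that do not exist yet -> state stays "configuring"
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # create the members one at a time; each malloc bdev is claimed as it appears,
    # and the raid transitions to "online" only after the fourth is configured
    for i in 1 2 3 4; do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev$i
    done

Each intermediate state is what the verify_raid_bdev_state calls below assert: "configuring" with 0, 1, 2 and 3 discovered members, then "online" with 4/4.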
00:22:46.552 [2024-07-13 23:08:35.857354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.810 [2024-07-13 23:08:35.996974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.810 [2024-07-13 23:08:36.098253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.810 [2024-07-13 23:08:36.173767] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.743 23:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.743 23:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:47.743 23:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:47.743 [2024-07-13 23:08:37.107025] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:47.743 [2024-07-13 23:08:37.107391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:47.743 [2024-07-13 23:08:37.107520] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:47.743 [2024-07-13 23:08:37.107672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:47.743 [2024-07-13 23:08:37.107815] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:47.743 [2024-07-13 23:08:37.107903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:47.743 [2024-07-13 23:08:37.108050] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:47.743 [2024-07-13 23:08:37.108213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.743 23:08:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.000 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.000 "name": "Existed_Raid", 00:22:48.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.000 "strip_size_kb": 64, 00:22:48.000 "state": "configuring", 00:22:48.000 "raid_level": "concat", 00:22:48.001 "superblock": false, 00:22:48.001 "num_base_bdevs": 4, 00:22:48.001 "num_base_bdevs_discovered": 0, 00:22:48.001 "num_base_bdevs_operational": 4, 00:22:48.001 "base_bdevs_list": [ 00:22:48.001 { 00:22:48.001 "name": "BaseBdev1", 00:22:48.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.001 "is_configured": false, 00:22:48.001 "data_offset": 0, 00:22:48.001 "data_size": 0 00:22:48.001 }, 00:22:48.001 { 00:22:48.001 "name": "BaseBdev2", 00:22:48.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.001 "is_configured": false, 00:22:48.001 "data_offset": 0, 00:22:48.001 "data_size": 0 00:22:48.001 }, 00:22:48.001 { 00:22:48.001 "name": "BaseBdev3", 00:22:48.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.001 "is_configured": false, 00:22:48.001 "data_offset": 0, 00:22:48.001 "data_size": 0 00:22:48.001 }, 00:22:48.001 { 00:22:48.001 "name": "BaseBdev4", 00:22:48.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.001 "is_configured": false, 00:22:48.001 "data_offset": 0, 00:22:48.001 "data_size": 0 00:22:48.001 } 00:22:48.001 ] 00:22:48.001 }' 00:22:48.001 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.001 23:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.932 23:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:48.932 [2024-07-13 23:08:38.175148] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.932 [2024-07-13 23:08:38.175475] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:48.932 23:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:49.189 [2024-07-13 23:08:38.439188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:49.189 [2024-07-13 23:08:38.439435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:49.189 [2024-07-13 23:08:38.439551] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:49.189 [2024-07-13 23:08:38.439623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:49.189 [2024-07-13 23:08:38.439838] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:49.189 [2024-07-13 23:08:38.439911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:49.189 [2024-07-13 23:08:38.440076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:49.189 [2024-07-13 23:08:38.440155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:49.189 23:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:49.447 [2024-07-13 23:08:38.667241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.447 BaseBdev1 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:49.447 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.704 23:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:49.962 [ 00:22:49.962 { 00:22:49.962 "name": "BaseBdev1", 00:22:49.962 "aliases": [ 00:22:49.962 "bfe500a9-25d6-4a1d-bc9a-16b57c873f98" 00:22:49.962 ], 00:22:49.962 "product_name": "Malloc disk", 00:22:49.962 "block_size": 512, 00:22:49.962 "num_blocks": 65536, 00:22:49.962 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:49.962 "assigned_rate_limits": { 00:22:49.962 "rw_ios_per_sec": 0, 00:22:49.962 "rw_mbytes_per_sec": 0, 00:22:49.962 "r_mbytes_per_sec": 0, 00:22:49.962 "w_mbytes_per_sec": 0 00:22:49.962 }, 00:22:49.962 "claimed": true, 00:22:49.962 "claim_type": "exclusive_write", 00:22:49.962 "zoned": false, 00:22:49.962 "supported_io_types": { 00:22:49.962 "read": true, 00:22:49.962 "write": true, 00:22:49.962 "unmap": true, 00:22:49.962 "flush": true, 00:22:49.962 "reset": true, 00:22:49.962 "nvme_admin": false, 00:22:49.962 "nvme_io": false, 00:22:49.962 "nvme_io_md": false, 00:22:49.962 "write_zeroes": true, 00:22:49.962 "zcopy": true, 00:22:49.962 "get_zone_info": false, 00:22:49.962 "zone_management": false, 00:22:49.962 "zone_append": false, 00:22:49.962 "compare": false, 00:22:49.962 "compare_and_write": false, 00:22:49.962 "abort": true, 00:22:49.962 "seek_hole": false, 00:22:49.962 "seek_data": false, 00:22:49.962 "copy": true, 00:22:49.962 "nvme_iov_md": false 00:22:49.962 }, 00:22:49.962 "memory_domains": [ 00:22:49.962 { 00:22:49.962 "dma_device_id": "system", 00:22:49.962 "dma_device_type": 1 00:22:49.962 }, 00:22:49.962 { 00:22:49.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.962 "dma_device_type": 2 00:22:49.962 } 00:22:49.962 ], 00:22:49.962 "driver_specific": {} 00:22:49.962 } 00:22:49.962 ] 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.962 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.220 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.220 "name": "Existed_Raid", 00:22:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.220 "strip_size_kb": 64, 00:22:50.220 "state": "configuring", 00:22:50.220 "raid_level": "concat", 00:22:50.220 "superblock": false, 00:22:50.220 "num_base_bdevs": 4, 00:22:50.220 "num_base_bdevs_discovered": 1, 00:22:50.220 "num_base_bdevs_operational": 4, 00:22:50.220 "base_bdevs_list": [ 00:22:50.220 { 00:22:50.220 "name": "BaseBdev1", 00:22:50.220 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:50.220 "is_configured": true, 00:22:50.220 "data_offset": 0, 00:22:50.220 "data_size": 65536 00:22:50.220 }, 00:22:50.220 { 00:22:50.220 "name": "BaseBdev2", 00:22:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.220 "is_configured": false, 00:22:50.220 "data_offset": 0, 00:22:50.220 "data_size": 0 00:22:50.220 }, 00:22:50.220 { 00:22:50.220 "name": "BaseBdev3", 00:22:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.220 "is_configured": false, 00:22:50.220 "data_offset": 0, 00:22:50.220 "data_size": 0 00:22:50.220 }, 00:22:50.220 { 00:22:50.220 "name": "BaseBdev4", 00:22:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.220 "is_configured": false, 00:22:50.220 "data_offset": 0, 00:22:50.220 "data_size": 0 00:22:50.220 } 00:22:50.220 ] 00:22:50.220 }' 00:22:50.220 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.220 23:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.786 23:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:51.044 [2024-07-13 23:08:40.255834] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:51.044 [2024-07-13 23:08:40.256205] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:51.044 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:51.318 [2024-07-13 23:08:40.540058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.318 [2024-07-13 23:08:40.542871] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:22:51.318 [2024-07-13 23:08:40.543089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:51.318 [2024-07-13 23:08:40.543251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:51.318 [2024-07-13 23:08:40.543330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:51.318 [2024-07-13 23:08:40.543473] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:51.318 [2024-07-13 23:08:40.543539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.318 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.576 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.576 "name": "Existed_Raid", 00:22:51.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.576 "strip_size_kb": 64, 00:22:51.576 "state": "configuring", 00:22:51.576 "raid_level": "concat", 00:22:51.576 "superblock": false, 00:22:51.576 "num_base_bdevs": 4, 00:22:51.576 "num_base_bdevs_discovered": 1, 00:22:51.576 "num_base_bdevs_operational": 4, 00:22:51.576 "base_bdevs_list": [ 00:22:51.576 { 00:22:51.576 "name": "BaseBdev1", 00:22:51.576 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:51.576 "is_configured": true, 00:22:51.576 "data_offset": 0, 00:22:51.576 "data_size": 65536 00:22:51.576 }, 00:22:51.576 { 00:22:51.576 "name": "BaseBdev2", 00:22:51.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.576 "is_configured": false, 00:22:51.576 "data_offset": 0, 00:22:51.576 "data_size": 0 00:22:51.576 }, 00:22:51.576 { 00:22:51.576 "name": "BaseBdev3", 00:22:51.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.576 "is_configured": false, 00:22:51.576 "data_offset": 0, 00:22:51.576 "data_size": 0 
00:22:51.576 }, 00:22:51.576 { 00:22:51.576 "name": "BaseBdev4", 00:22:51.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.576 "is_configured": false, 00:22:51.576 "data_offset": 0, 00:22:51.576 "data_size": 0 00:22:51.576 } 00:22:51.576 ] 00:22:51.576 }' 00:22:51.576 23:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.576 23:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.141 23:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:52.399 [2024-07-13 23:08:41.646328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:52.399 BaseBdev2 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:52.399 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.657 23:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:52.916 [ 00:22:52.916 { 00:22:52.916 "name": "BaseBdev2", 00:22:52.916 "aliases": [ 00:22:52.916 "564dcbb0-88a0-49e4-b9f9-a91c5868b936" 00:22:52.916 ], 00:22:52.916 "product_name": "Malloc disk", 00:22:52.916 "block_size": 512, 00:22:52.916 "num_blocks": 65536, 00:22:52.916 "uuid": "564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:22:52.916 "assigned_rate_limits": { 00:22:52.916 "rw_ios_per_sec": 0, 00:22:52.916 "rw_mbytes_per_sec": 0, 00:22:52.916 "r_mbytes_per_sec": 0, 00:22:52.916 "w_mbytes_per_sec": 0 00:22:52.916 }, 00:22:52.916 "claimed": true, 00:22:52.916 "claim_type": "exclusive_write", 00:22:52.916 "zoned": false, 00:22:52.916 "supported_io_types": { 00:22:52.916 "read": true, 00:22:52.916 "write": true, 00:22:52.916 "unmap": true, 00:22:52.916 "flush": true, 00:22:52.916 "reset": true, 00:22:52.916 "nvme_admin": false, 00:22:52.916 "nvme_io": false, 00:22:52.916 "nvme_io_md": false, 00:22:52.916 "write_zeroes": true, 00:22:52.916 "zcopy": true, 00:22:52.916 "get_zone_info": false, 00:22:52.916 "zone_management": false, 00:22:52.916 "zone_append": false, 00:22:52.916 "compare": false, 00:22:52.916 "compare_and_write": false, 00:22:52.916 "abort": true, 00:22:52.916 "seek_hole": false, 00:22:52.916 "seek_data": false, 00:22:52.916 "copy": true, 00:22:52.916 "nvme_iov_md": false 00:22:52.916 }, 00:22:52.916 "memory_domains": [ 00:22:52.916 { 00:22:52.916 "dma_device_id": "system", 00:22:52.916 "dma_device_type": 1 00:22:52.916 }, 00:22:52.916 { 00:22:52.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.916 "dma_device_type": 2 00:22:52.916 } 00:22:52.916 ], 00:22:52.916 "driver_specific": {} 00:22:52.916 } 00:22:52.916 ] 00:22:52.916 
23:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.916 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.175 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.175 "name": "Existed_Raid", 00:22:53.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.175 "strip_size_kb": 64, 00:22:53.175 "state": "configuring", 00:22:53.175 "raid_level": "concat", 00:22:53.175 "superblock": false, 00:22:53.175 "num_base_bdevs": 4, 00:22:53.175 "num_base_bdevs_discovered": 2, 00:22:53.175 "num_base_bdevs_operational": 4, 00:22:53.175 "base_bdevs_list": [ 00:22:53.175 { 00:22:53.175 "name": "BaseBdev1", 00:22:53.175 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:53.175 "is_configured": true, 00:22:53.175 "data_offset": 0, 00:22:53.175 "data_size": 65536 00:22:53.175 }, 00:22:53.175 { 00:22:53.175 "name": "BaseBdev2", 00:22:53.175 "uuid": "564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:22:53.175 "is_configured": true, 00:22:53.175 "data_offset": 0, 00:22:53.175 "data_size": 65536 00:22:53.175 }, 00:22:53.175 { 00:22:53.175 "name": "BaseBdev3", 00:22:53.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.175 "is_configured": false, 00:22:53.175 "data_offset": 0, 00:22:53.175 "data_size": 0 00:22:53.175 }, 00:22:53.175 { 00:22:53.175 "name": "BaseBdev4", 00:22:53.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.175 "is_configured": false, 00:22:53.175 "data_offset": 0, 00:22:53.175 "data_size": 0 00:22:53.175 } 00:22:53.175 ] 00:22:53.175 }' 00:22:53.175 23:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.175 23:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.741 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:54.005 [2024-07-13 23:08:43.291279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:54.005 BaseBdev3 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:54.005 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:54.268 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:54.525 [ 00:22:54.525 { 00:22:54.525 "name": "BaseBdev3", 00:22:54.525 "aliases": [ 00:22:54.525 "2ba00c85-9c8a-421b-8c9c-7ffc292901b3" 00:22:54.525 ], 00:22:54.525 "product_name": "Malloc disk", 00:22:54.525 "block_size": 512, 00:22:54.525 "num_blocks": 65536, 00:22:54.525 "uuid": "2ba00c85-9c8a-421b-8c9c-7ffc292901b3", 00:22:54.525 "assigned_rate_limits": { 00:22:54.525 "rw_ios_per_sec": 0, 00:22:54.525 "rw_mbytes_per_sec": 0, 00:22:54.525 "r_mbytes_per_sec": 0, 00:22:54.525 "w_mbytes_per_sec": 0 00:22:54.525 }, 00:22:54.525 "claimed": true, 00:22:54.525 "claim_type": "exclusive_write", 00:22:54.525 "zoned": false, 00:22:54.525 "supported_io_types": { 00:22:54.525 "read": true, 00:22:54.525 "write": true, 00:22:54.525 "unmap": true, 00:22:54.525 "flush": true, 00:22:54.525 "reset": true, 00:22:54.525 "nvme_admin": false, 00:22:54.525 "nvme_io": false, 00:22:54.525 "nvme_io_md": false, 00:22:54.525 "write_zeroes": true, 00:22:54.525 "zcopy": true, 00:22:54.525 "get_zone_info": false, 00:22:54.525 "zone_management": false, 00:22:54.525 "zone_append": false, 00:22:54.525 "compare": false, 00:22:54.525 "compare_and_write": false, 00:22:54.525 "abort": true, 00:22:54.525 "seek_hole": false, 00:22:54.525 "seek_data": false, 00:22:54.525 "copy": true, 00:22:54.525 "nvme_iov_md": false 00:22:54.525 }, 00:22:54.525 "memory_domains": [ 00:22:54.525 { 00:22:54.525 "dma_device_id": "system", 00:22:54.525 "dma_device_type": 1 00:22:54.525 }, 00:22:54.525 { 00:22:54.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.525 "dma_device_type": 2 00:22:54.525 } 00:22:54.525 ], 00:22:54.525 "driver_specific": {} 00:22:54.525 } 00:22:54.525 ] 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:54.525 23:08:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.525 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.526 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.526 23:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.783 23:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.783 "name": "Existed_Raid", 00:22:54.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.783 "strip_size_kb": 64, 00:22:54.783 "state": "configuring", 00:22:54.783 "raid_level": "concat", 00:22:54.783 "superblock": false, 00:22:54.783 "num_base_bdevs": 4, 00:22:54.783 "num_base_bdevs_discovered": 3, 00:22:54.783 "num_base_bdevs_operational": 4, 00:22:54.783 "base_bdevs_list": [ 00:22:54.783 { 00:22:54.783 "name": "BaseBdev1", 00:22:54.784 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:54.784 "is_configured": true, 00:22:54.784 "data_offset": 0, 00:22:54.784 "data_size": 65536 00:22:54.784 }, 00:22:54.784 { 00:22:54.784 "name": "BaseBdev2", 00:22:54.784 "uuid": "564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:22:54.784 "is_configured": true, 00:22:54.784 "data_offset": 0, 00:22:54.784 "data_size": 65536 00:22:54.784 }, 00:22:54.784 { 00:22:54.784 "name": "BaseBdev3", 00:22:54.784 "uuid": "2ba00c85-9c8a-421b-8c9c-7ffc292901b3", 00:22:54.784 "is_configured": true, 00:22:54.784 "data_offset": 0, 00:22:54.784 "data_size": 65536 00:22:54.784 }, 00:22:54.784 { 00:22:54.784 "name": "BaseBdev4", 00:22:54.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.784 "is_configured": false, 00:22:54.784 "data_offset": 0, 00:22:54.784 "data_size": 0 00:22:54.784 } 00:22:54.784 ] 00:22:54.784 }' 00:22:54.784 23:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.784 23:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.733 23:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:55.733 [2024-07-13 23:08:45.080338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:55.733 [2024-07-13 23:08:45.080435] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:55.733 [2024-07-13 23:08:45.080448] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:55.733 [2024-07-13 23:08:45.080626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:55.733 [2024-07-13 
23:08:45.081145] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:55.733 [2024-07-13 23:08:45.081170] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:55.733 [2024-07-13 23:08:45.081472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.733 BaseBdev4 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:55.733 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:55.991 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:56.249 [ 00:22:56.249 { 00:22:56.249 "name": "BaseBdev4", 00:22:56.249 "aliases": [ 00:22:56.249 "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6" 00:22:56.249 ], 00:22:56.249 "product_name": "Malloc disk", 00:22:56.249 "block_size": 512, 00:22:56.249 "num_blocks": 65536, 00:22:56.249 "uuid": "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6", 00:22:56.249 "assigned_rate_limits": { 00:22:56.249 "rw_ios_per_sec": 0, 00:22:56.249 "rw_mbytes_per_sec": 0, 00:22:56.249 "r_mbytes_per_sec": 0, 00:22:56.249 "w_mbytes_per_sec": 0 00:22:56.249 }, 00:22:56.249 "claimed": true, 00:22:56.249 "claim_type": "exclusive_write", 00:22:56.250 "zoned": false, 00:22:56.250 "supported_io_types": { 00:22:56.250 "read": true, 00:22:56.250 "write": true, 00:22:56.250 "unmap": true, 00:22:56.250 "flush": true, 00:22:56.250 "reset": true, 00:22:56.250 "nvme_admin": false, 00:22:56.250 "nvme_io": false, 00:22:56.250 "nvme_io_md": false, 00:22:56.250 "write_zeroes": true, 00:22:56.250 "zcopy": true, 00:22:56.250 "get_zone_info": false, 00:22:56.250 "zone_management": false, 00:22:56.250 "zone_append": false, 00:22:56.250 "compare": false, 00:22:56.250 "compare_and_write": false, 00:22:56.250 "abort": true, 00:22:56.250 "seek_hole": false, 00:22:56.250 "seek_data": false, 00:22:56.250 "copy": true, 00:22:56.250 "nvme_iov_md": false 00:22:56.250 }, 00:22:56.250 "memory_domains": [ 00:22:56.250 { 00:22:56.250 "dma_device_id": "system", 00:22:56.250 "dma_device_type": 1 00:22:56.250 }, 00:22:56.250 { 00:22:56.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.250 "dma_device_type": 2 00:22:56.250 } 00:22:56.250 ], 00:22:56.250 "driver_specific": {} 00:22:56.250 } 00:22:56.250 ] 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.250 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.509 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.509 "name": "Existed_Raid", 00:22:56.509 "uuid": "0912e5dd-98ae-4523-843a-cc26cd88170c", 00:22:56.509 "strip_size_kb": 64, 00:22:56.509 "state": "online", 00:22:56.509 "raid_level": "concat", 00:22:56.509 "superblock": false, 00:22:56.509 "num_base_bdevs": 4, 00:22:56.509 "num_base_bdevs_discovered": 4, 00:22:56.509 "num_base_bdevs_operational": 4, 00:22:56.509 "base_bdevs_list": [ 00:22:56.509 { 00:22:56.509 "name": "BaseBdev1", 00:22:56.509 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:56.509 "is_configured": true, 00:22:56.509 "data_offset": 0, 00:22:56.509 "data_size": 65536 00:22:56.509 }, 00:22:56.509 { 00:22:56.509 "name": "BaseBdev2", 00:22:56.509 "uuid": "564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:22:56.509 "is_configured": true, 00:22:56.509 "data_offset": 0, 00:22:56.509 "data_size": 65536 00:22:56.509 }, 00:22:56.509 { 00:22:56.509 "name": "BaseBdev3", 00:22:56.509 "uuid": "2ba00c85-9c8a-421b-8c9c-7ffc292901b3", 00:22:56.509 "is_configured": true, 00:22:56.509 "data_offset": 0, 00:22:56.509 "data_size": 65536 00:22:56.509 }, 00:22:56.509 { 00:22:56.509 "name": "BaseBdev4", 00:22:56.509 "uuid": "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6", 00:22:56.509 "is_configured": true, 00:22:56.509 "data_offset": 0, 00:22:56.509 "data_size": 65536 00:22:56.509 } 00:22:56.509 ] 00:22:56.509 }' 00:22:56.509 23:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.509 23:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:57.076 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:57.335 [2024-07-13 23:08:46.701335] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.335 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:57.335 "name": "Existed_Raid", 00:22:57.335 "aliases": [ 00:22:57.335 "0912e5dd-98ae-4523-843a-cc26cd88170c" 00:22:57.335 ], 00:22:57.335 "product_name": "Raid Volume", 00:22:57.335 "block_size": 512, 00:22:57.335 "num_blocks": 262144, 00:22:57.335 "uuid": "0912e5dd-98ae-4523-843a-cc26cd88170c", 00:22:57.335 "assigned_rate_limits": { 00:22:57.335 "rw_ios_per_sec": 0, 00:22:57.335 "rw_mbytes_per_sec": 0, 00:22:57.335 "r_mbytes_per_sec": 0, 00:22:57.335 "w_mbytes_per_sec": 0 00:22:57.335 }, 00:22:57.335 "claimed": false, 00:22:57.335 "zoned": false, 00:22:57.335 "supported_io_types": { 00:22:57.335 "read": true, 00:22:57.335 "write": true, 00:22:57.335 "unmap": true, 00:22:57.335 "flush": true, 00:22:57.335 "reset": true, 00:22:57.335 "nvme_admin": false, 00:22:57.335 "nvme_io": false, 00:22:57.335 "nvme_io_md": false, 00:22:57.335 "write_zeroes": true, 00:22:57.335 "zcopy": false, 00:22:57.335 "get_zone_info": false, 00:22:57.335 "zone_management": false, 00:22:57.335 "zone_append": false, 00:22:57.335 "compare": false, 00:22:57.335 "compare_and_write": false, 00:22:57.335 "abort": false, 00:22:57.335 "seek_hole": false, 00:22:57.335 "seek_data": false, 00:22:57.335 "copy": false, 00:22:57.335 "nvme_iov_md": false 00:22:57.335 }, 00:22:57.335 "memory_domains": [ 00:22:57.335 { 00:22:57.335 "dma_device_id": "system", 00:22:57.335 "dma_device_type": 1 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.335 "dma_device_type": 2 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "system", 00:22:57.335 "dma_device_type": 1 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.335 "dma_device_type": 2 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "system", 00:22:57.335 "dma_device_type": 1 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.335 "dma_device_type": 2 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "system", 00:22:57.335 "dma_device_type": 1 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.335 "dma_device_type": 2 00:22:57.335 } 00:22:57.335 ], 00:22:57.335 "driver_specific": { 00:22:57.335 "raid": { 00:22:57.335 "uuid": "0912e5dd-98ae-4523-843a-cc26cd88170c", 00:22:57.335 "strip_size_kb": 64, 00:22:57.335 "state": "online", 00:22:57.335 "raid_level": "concat", 00:22:57.335 "superblock": false, 00:22:57.335 "num_base_bdevs": 4, 00:22:57.335 "num_base_bdevs_discovered": 4, 00:22:57.335 "num_base_bdevs_operational": 4, 00:22:57.335 "base_bdevs_list": [ 00:22:57.335 { 00:22:57.335 "name": "BaseBdev1", 00:22:57.335 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:57.335 "is_configured": true, 00:22:57.335 "data_offset": 0, 00:22:57.335 "data_size": 65536 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "name": "BaseBdev2", 00:22:57.335 "uuid": 
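Every verify_raid_bdev_state call in this trace reduces to the pattern just shown: dump all raid bdevs, select the one under test by name, and compare the captured JSON fields against the expected values. A hedged reconstruction from the fields the trace captures (the real helper lives in test/bdev/bdev_raid.sh; the function name and argument order here mirror the trace's locals but are otherwise illustrative):

    verify_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5 tmp
        # fetch every raid bdev and keep only the one under test
        tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                  bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # compare the captured fields against the expected values
        [[ $(jq -r .state <<< "$tmp") == "$expected_state" ]] &&
            [[ $(jq -r .raid_level <<< "$tmp") == "$raid_level" ]] &&
            [[ $(jq -r .strip_size_kb <<< "$tmp") == "$strip_size" ]] &&
            [[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == "$operational" ]]
    }

    verify_state Existed_Raid online concat 64 4   # the check that just passed above

The verify_raid_bdev_properties helper whose locals are being declared next goes one level deeper: it fetches the assembled Raid Volume with bdev_get_bdevs -b Existed_Raid, extracts the configured member names via jq '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name', and then asserts that block_size, md_size, md_interleave and dif_type of the raid volume match each member's values (hence the 512 == 512 and null == null comparisons in the trace that follows).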
"564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:22:57.335 "is_configured": true, 00:22:57.335 "data_offset": 0, 00:22:57.335 "data_size": 65536 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "name": "BaseBdev3", 00:22:57.335 "uuid": "2ba00c85-9c8a-421b-8c9c-7ffc292901b3", 00:22:57.335 "is_configured": true, 00:22:57.335 "data_offset": 0, 00:22:57.335 "data_size": 65536 00:22:57.335 }, 00:22:57.335 { 00:22:57.335 "name": "BaseBdev4", 00:22:57.335 "uuid": "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6", 00:22:57.335 "is_configured": true, 00:22:57.335 "data_offset": 0, 00:22:57.335 "data_size": 65536 00:22:57.335 } 00:22:57.335 ] 00:22:57.335 } 00:22:57.335 } 00:22:57.335 }' 00:22:57.335 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.592 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:57.592 BaseBdev2 00:22:57.592 BaseBdev3 00:22:57.592 BaseBdev4' 00:22:57.592 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.592 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.592 23:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:57.850 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.850 "name": "BaseBdev1", 00:22:57.850 "aliases": [ 00:22:57.850 "bfe500a9-25d6-4a1d-bc9a-16b57c873f98" 00:22:57.850 ], 00:22:57.850 "product_name": "Malloc disk", 00:22:57.850 "block_size": 512, 00:22:57.850 "num_blocks": 65536, 00:22:57.850 "uuid": "bfe500a9-25d6-4a1d-bc9a-16b57c873f98", 00:22:57.850 "assigned_rate_limits": { 00:22:57.850 "rw_ios_per_sec": 0, 00:22:57.850 "rw_mbytes_per_sec": 0, 00:22:57.850 "r_mbytes_per_sec": 0, 00:22:57.850 "w_mbytes_per_sec": 0 00:22:57.850 }, 00:22:57.850 "claimed": true, 00:22:57.850 "claim_type": "exclusive_write", 00:22:57.850 "zoned": false, 00:22:57.850 "supported_io_types": { 00:22:57.850 "read": true, 00:22:57.850 "write": true, 00:22:57.850 "unmap": true, 00:22:57.850 "flush": true, 00:22:57.850 "reset": true, 00:22:57.850 "nvme_admin": false, 00:22:57.851 "nvme_io": false, 00:22:57.851 "nvme_io_md": false, 00:22:57.851 "write_zeroes": true, 00:22:57.851 "zcopy": true, 00:22:57.851 "get_zone_info": false, 00:22:57.851 "zone_management": false, 00:22:57.851 "zone_append": false, 00:22:57.851 "compare": false, 00:22:57.851 "compare_and_write": false, 00:22:57.851 "abort": true, 00:22:57.851 "seek_hole": false, 00:22:57.851 "seek_data": false, 00:22:57.851 "copy": true, 00:22:57.851 "nvme_iov_md": false 00:22:57.851 }, 00:22:57.851 "memory_domains": [ 00:22:57.851 { 00:22:57.851 "dma_device_id": "system", 00:22:57.851 "dma_device_type": 1 00:22:57.851 }, 00:22:57.851 { 00:22:57.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.851 "dma_device_type": 2 00:22:57.851 } 00:22:57.851 ], 00:22:57.851 "driver_specific": {} 00:22:57.851 }' 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.851 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:58.109 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.367 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.367 "name": "BaseBdev2", 00:22:58.367 "aliases": [ 00:22:58.367 "564dcbb0-88a0-49e4-b9f9-a91c5868b936" 00:22:58.367 ], 00:22:58.367 "product_name": "Malloc disk", 00:22:58.367 "block_size": 512, 00:22:58.367 "num_blocks": 65536, 00:22:58.367 "uuid": "564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:22:58.367 "assigned_rate_limits": { 00:22:58.367 "rw_ios_per_sec": 0, 00:22:58.367 "rw_mbytes_per_sec": 0, 00:22:58.367 "r_mbytes_per_sec": 0, 00:22:58.367 "w_mbytes_per_sec": 0 00:22:58.367 }, 00:22:58.367 "claimed": true, 00:22:58.367 "claim_type": "exclusive_write", 00:22:58.367 "zoned": false, 00:22:58.367 "supported_io_types": { 00:22:58.367 "read": true, 00:22:58.367 "write": true, 00:22:58.367 "unmap": true, 00:22:58.367 "flush": true, 00:22:58.367 "reset": true, 00:22:58.367 "nvme_admin": false, 00:22:58.367 "nvme_io": false, 00:22:58.367 "nvme_io_md": false, 00:22:58.367 "write_zeroes": true, 00:22:58.367 "zcopy": true, 00:22:58.367 "get_zone_info": false, 00:22:58.367 "zone_management": false, 00:22:58.367 "zone_append": false, 00:22:58.367 "compare": false, 00:22:58.367 "compare_and_write": false, 00:22:58.367 "abort": true, 00:22:58.367 "seek_hole": false, 00:22:58.367 "seek_data": false, 00:22:58.367 "copy": true, 00:22:58.367 "nvme_iov_md": false 00:22:58.367 }, 00:22:58.367 "memory_domains": [ 00:22:58.367 { 00:22:58.367 "dma_device_id": "system", 00:22:58.367 "dma_device_type": 1 00:22:58.367 }, 00:22:58.367 { 00:22:58.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.367 "dma_device_type": 2 00:22:58.367 } 00:22:58.367 ], 00:22:58.367 "driver_specific": {} 00:22:58.367 }' 00:22:58.367 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.367 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.367 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.367 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.367 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:58.625 23:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.883 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.883 "name": "BaseBdev3", 00:22:58.883 "aliases": [ 00:22:58.883 "2ba00c85-9c8a-421b-8c9c-7ffc292901b3" 00:22:58.883 ], 00:22:58.883 "product_name": "Malloc disk", 00:22:58.883 "block_size": 512, 00:22:58.883 "num_blocks": 65536, 00:22:58.883 "uuid": "2ba00c85-9c8a-421b-8c9c-7ffc292901b3", 00:22:58.883 "assigned_rate_limits": { 00:22:58.883 "rw_ios_per_sec": 0, 00:22:58.883 "rw_mbytes_per_sec": 0, 00:22:58.883 "r_mbytes_per_sec": 0, 00:22:58.883 "w_mbytes_per_sec": 0 00:22:58.883 }, 00:22:58.883 "claimed": true, 00:22:58.883 "claim_type": "exclusive_write", 00:22:58.883 "zoned": false, 00:22:58.883 "supported_io_types": { 00:22:58.883 "read": true, 00:22:58.883 "write": true, 00:22:58.883 "unmap": true, 00:22:58.883 "flush": true, 00:22:58.883 "reset": true, 00:22:58.883 "nvme_admin": false, 00:22:58.883 "nvme_io": false, 00:22:58.883 "nvme_io_md": false, 00:22:58.883 "write_zeroes": true, 00:22:58.883 "zcopy": true, 00:22:58.883 "get_zone_info": false, 00:22:58.883 "zone_management": false, 00:22:58.883 "zone_append": false, 00:22:58.883 "compare": false, 00:22:58.883 "compare_and_write": false, 00:22:58.883 "abort": true, 00:22:58.883 "seek_hole": false, 00:22:58.883 "seek_data": false, 00:22:58.883 "copy": true, 00:22:58.883 "nvme_iov_md": false 00:22:58.883 }, 00:22:58.883 "memory_domains": [ 00:22:58.883 { 00:22:58.883 "dma_device_id": "system", 00:22:58.883 "dma_device_type": 1 00:22:58.883 }, 00:22:58.883 { 00:22:58.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.883 "dma_device_type": 2 00:22:58.883 } 00:22:58.883 ], 00:22:58.883 "driver_specific": {} 00:22:58.883 }' 00:22:58.883 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.883 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.140 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:59.141 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.398 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.398 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:59.398 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:59.398 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:59.398 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:59.656 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:59.656 "name": "BaseBdev4", 00:22:59.656 "aliases": [ 00:22:59.656 "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6" 00:22:59.656 ], 00:22:59.656 "product_name": "Malloc disk", 00:22:59.656 "block_size": 512, 00:22:59.656 "num_blocks": 65536, 00:22:59.656 "uuid": "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6", 00:22:59.656 "assigned_rate_limits": { 00:22:59.656 "rw_ios_per_sec": 0, 00:22:59.656 "rw_mbytes_per_sec": 0, 00:22:59.656 "r_mbytes_per_sec": 0, 00:22:59.656 "w_mbytes_per_sec": 0 00:22:59.656 }, 00:22:59.656 "claimed": true, 00:22:59.656 "claim_type": "exclusive_write", 00:22:59.656 "zoned": false, 00:22:59.656 "supported_io_types": { 00:22:59.656 "read": true, 00:22:59.656 "write": true, 00:22:59.656 "unmap": true, 00:22:59.656 "flush": true, 00:22:59.656 "reset": true, 00:22:59.656 "nvme_admin": false, 00:22:59.656 "nvme_io": false, 00:22:59.656 "nvme_io_md": false, 00:22:59.656 "write_zeroes": true, 00:22:59.656 "zcopy": true, 00:22:59.656 "get_zone_info": false, 00:22:59.656 "zone_management": false, 00:22:59.656 "zone_append": false, 00:22:59.656 "compare": false, 00:22:59.656 "compare_and_write": false, 00:22:59.656 "abort": true, 00:22:59.656 "seek_hole": false, 00:22:59.656 "seek_data": false, 00:22:59.656 "copy": true, 00:22:59.656 "nvme_iov_md": false 00:22:59.656 }, 00:22:59.656 "memory_domains": [ 00:22:59.656 { 00:22:59.656 "dma_device_id": "system", 00:22:59.656 "dma_device_type": 1 00:22:59.656 }, 00:22:59.656 { 00:22:59.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.656 "dma_device_type": 2 00:22:59.656 } 00:22:59.656 ], 00:22:59.656 "driver_specific": {} 00:22:59.656 }' 00:22:59.656 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.656 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:59.656 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:59.656 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.656 23:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:59.656 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:59.656 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.914 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.914 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:59.914 23:08:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.914 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.914 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:59.914 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:00.172 [2024-07-13 23:08:49.445817] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:00.172 [2024-07-13 23:08:49.445887] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.172 [2024-07-13 23:08:49.445999] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:00.172 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.173 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.173 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.173 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.173 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.173 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.431 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.431 "name": "Existed_Raid", 00:23:00.431 "uuid": "0912e5dd-98ae-4523-843a-cc26cd88170c", 00:23:00.431 "strip_size_kb": 64, 00:23:00.431 "state": "offline", 00:23:00.431 "raid_level": "concat", 00:23:00.431 "superblock": false, 00:23:00.431 "num_base_bdevs": 4, 00:23:00.431 "num_base_bdevs_discovered": 3, 00:23:00.431 "num_base_bdevs_operational": 3, 00:23:00.431 "base_bdevs_list": [ 00:23:00.431 { 00:23:00.431 "name": null, 00:23:00.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.431 "is_configured": false, 00:23:00.431 "data_offset": 0, 00:23:00.431 "data_size": 65536 00:23:00.431 }, 00:23:00.431 { 00:23:00.431 "name": "BaseBdev2", 
00:23:00.431 "uuid": "564dcbb0-88a0-49e4-b9f9-a91c5868b936", 00:23:00.431 "is_configured": true, 00:23:00.431 "data_offset": 0, 00:23:00.431 "data_size": 65536 00:23:00.431 }, 00:23:00.431 { 00:23:00.431 "name": "BaseBdev3", 00:23:00.431 "uuid": "2ba00c85-9c8a-421b-8c9c-7ffc292901b3", 00:23:00.431 "is_configured": true, 00:23:00.431 "data_offset": 0, 00:23:00.431 "data_size": 65536 00:23:00.431 }, 00:23:00.431 { 00:23:00.431 "name": "BaseBdev4", 00:23:00.431 "uuid": "fcaaf4b8-73fe-41db-9e7d-437b1d54dcb6", 00:23:00.431 "is_configured": true, 00:23:00.431 "data_offset": 0, 00:23:00.431 "data_size": 65536 00:23:00.431 } 00:23:00.431 ] 00:23:00.431 }' 00:23:00.431 23:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.431 23:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.997 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:00.997 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:00.997 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.997 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:01.255 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:01.255 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:01.255 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:01.513 [2024-07-13 23:08:50.819474] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:01.513 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:01.513 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:01.513 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.513 23:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:01.772 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:01.772 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:01.772 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:02.030 [2024-07-13 23:08:51.354997] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:02.030 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:02.030 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:02.030 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.030 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:02.288 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:02.288 23:08:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:02.288 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:02.546 [2024-07-13 23:08:51.837497] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:02.546 [2024-07-13 23:08:51.837600] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:23:02.546 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:02.546 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:02.546 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.546 23:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:02.805 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:02.805 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:02.805 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:02.805 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:02.805 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:02.805 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:03.063 BaseBdev2 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:03.063 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:03.321 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:03.588 [ 00:23:03.588 { 00:23:03.588 "name": "BaseBdev2", 00:23:03.588 "aliases": [ 00:23:03.588 "0fd4b667-dacc-4366-bb0a-e1cb865db377" 00:23:03.588 ], 00:23:03.588 "product_name": "Malloc disk", 00:23:03.588 "block_size": 512, 00:23:03.588 "num_blocks": 65536, 00:23:03.588 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:03.588 "assigned_rate_limits": { 00:23:03.588 "rw_ios_per_sec": 0, 00:23:03.588 "rw_mbytes_per_sec": 0, 00:23:03.588 "r_mbytes_per_sec": 0, 00:23:03.588 "w_mbytes_per_sec": 0 00:23:03.588 }, 00:23:03.588 "claimed": false, 00:23:03.588 "zoned": false, 00:23:03.588 "supported_io_types": { 00:23:03.588 "read": true, 00:23:03.588 "write": true, 00:23:03.588 "unmap": 
true, 00:23:03.588 "flush": true, 00:23:03.588 "reset": true, 00:23:03.588 "nvme_admin": false, 00:23:03.588 "nvme_io": false, 00:23:03.588 "nvme_io_md": false, 00:23:03.588 "write_zeroes": true, 00:23:03.588 "zcopy": true, 00:23:03.588 "get_zone_info": false, 00:23:03.588 "zone_management": false, 00:23:03.588 "zone_append": false, 00:23:03.588 "compare": false, 00:23:03.588 "compare_and_write": false, 00:23:03.588 "abort": true, 00:23:03.588 "seek_hole": false, 00:23:03.588 "seek_data": false, 00:23:03.588 "copy": true, 00:23:03.588 "nvme_iov_md": false 00:23:03.588 }, 00:23:03.588 "memory_domains": [ 00:23:03.588 { 00:23:03.588 "dma_device_id": "system", 00:23:03.588 "dma_device_type": 1 00:23:03.588 }, 00:23:03.588 { 00:23:03.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.588 "dma_device_type": 2 00:23:03.588 } 00:23:03.588 ], 00:23:03.588 "driver_specific": {} 00:23:03.588 } 00:23:03.588 ] 00:23:03.588 23:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:03.588 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:03.588 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:03.588 23:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:03.869 BaseBdev3 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:03.869 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.126 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:04.383 [ 00:23:04.383 { 00:23:04.383 "name": "BaseBdev3", 00:23:04.383 "aliases": [ 00:23:04.383 "faebe469-87a6-4d63-9958-f2c8113e737c" 00:23:04.383 ], 00:23:04.383 "product_name": "Malloc disk", 00:23:04.383 "block_size": 512, 00:23:04.383 "num_blocks": 65536, 00:23:04.383 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:04.383 "assigned_rate_limits": { 00:23:04.383 "rw_ios_per_sec": 0, 00:23:04.383 "rw_mbytes_per_sec": 0, 00:23:04.383 "r_mbytes_per_sec": 0, 00:23:04.383 "w_mbytes_per_sec": 0 00:23:04.383 }, 00:23:04.383 "claimed": false, 00:23:04.383 "zoned": false, 00:23:04.383 "supported_io_types": { 00:23:04.383 "read": true, 00:23:04.383 "write": true, 00:23:04.383 "unmap": true, 00:23:04.383 "flush": true, 00:23:04.383 "reset": true, 00:23:04.383 "nvme_admin": false, 00:23:04.383 "nvme_io": false, 00:23:04.383 "nvme_io_md": false, 00:23:04.383 "write_zeroes": true, 00:23:04.383 "zcopy": true, 00:23:04.383 "get_zone_info": false, 00:23:04.383 "zone_management": false, 00:23:04.383 "zone_append": false, 00:23:04.383 
"compare": false, 00:23:04.383 "compare_and_write": false, 00:23:04.383 "abort": true, 00:23:04.383 "seek_hole": false, 00:23:04.383 "seek_data": false, 00:23:04.383 "copy": true, 00:23:04.383 "nvme_iov_md": false 00:23:04.383 }, 00:23:04.383 "memory_domains": [ 00:23:04.383 { 00:23:04.383 "dma_device_id": "system", 00:23:04.383 "dma_device_type": 1 00:23:04.383 }, 00:23:04.383 { 00:23:04.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.383 "dma_device_type": 2 00:23:04.383 } 00:23:04.383 ], 00:23:04.383 "driver_specific": {} 00:23:04.383 } 00:23:04.383 ] 00:23:04.383 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:04.383 23:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:04.383 23:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:04.383 23:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:04.641 BaseBdev4 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:04.641 23:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.898 23:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:04.898 [ 00:23:04.898 { 00:23:04.898 "name": "BaseBdev4", 00:23:04.898 "aliases": [ 00:23:04.898 "6f2e563c-55a0-4452-b46b-2eac0893fe97" 00:23:04.898 ], 00:23:04.898 "product_name": "Malloc disk", 00:23:04.898 "block_size": 512, 00:23:04.898 "num_blocks": 65536, 00:23:04.898 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:04.898 "assigned_rate_limits": { 00:23:04.898 "rw_ios_per_sec": 0, 00:23:04.898 "rw_mbytes_per_sec": 0, 00:23:04.898 "r_mbytes_per_sec": 0, 00:23:04.898 "w_mbytes_per_sec": 0 00:23:04.898 }, 00:23:04.898 "claimed": false, 00:23:04.898 "zoned": false, 00:23:04.898 "supported_io_types": { 00:23:04.898 "read": true, 00:23:04.898 "write": true, 00:23:04.898 "unmap": true, 00:23:04.898 "flush": true, 00:23:04.898 "reset": true, 00:23:04.898 "nvme_admin": false, 00:23:04.898 "nvme_io": false, 00:23:04.898 "nvme_io_md": false, 00:23:04.898 "write_zeroes": true, 00:23:04.898 "zcopy": true, 00:23:04.898 "get_zone_info": false, 00:23:04.898 "zone_management": false, 00:23:04.898 "zone_append": false, 00:23:04.898 "compare": false, 00:23:04.898 "compare_and_write": false, 00:23:04.898 "abort": true, 00:23:04.898 "seek_hole": false, 00:23:04.898 "seek_data": false, 00:23:04.898 "copy": true, 00:23:04.898 "nvme_iov_md": false 00:23:04.898 }, 00:23:04.898 "memory_domains": [ 00:23:04.898 { 00:23:04.898 "dma_device_id": "system", 00:23:04.898 
"dma_device_type": 1 00:23:04.898 }, 00:23:04.898 { 00:23:04.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.898 "dma_device_type": 2 00:23:04.898 } 00:23:04.898 ], 00:23:04.898 "driver_specific": {} 00:23:04.898 } 00:23:04.898 ] 00:23:04.898 23:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:04.898 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:04.898 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:04.898 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:05.157 [2024-07-13 23:08:54.550760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:05.157 [2024-07-13 23:08:54.551219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:05.157 [2024-07-13 23:08:54.551365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.157 [2024-07-13 23:08:54.553960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:05.157 [2024-07-13 23:08:54.554154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.416 "name": "Existed_Raid", 00:23:05.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.416 "strip_size_kb": 64, 00:23:05.416 "state": "configuring", 00:23:05.416 "raid_level": "concat", 00:23:05.416 "superblock": false, 00:23:05.416 "num_base_bdevs": 4, 00:23:05.416 "num_base_bdevs_discovered": 3, 00:23:05.416 "num_base_bdevs_operational": 4, 00:23:05.416 "base_bdevs_list": [ 00:23:05.416 { 00:23:05.416 "name": "BaseBdev1", 00:23:05.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.416 
"is_configured": false, 00:23:05.416 "data_offset": 0, 00:23:05.416 "data_size": 0 00:23:05.416 }, 00:23:05.416 { 00:23:05.416 "name": "BaseBdev2", 00:23:05.416 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:05.416 "is_configured": true, 00:23:05.416 "data_offset": 0, 00:23:05.416 "data_size": 65536 00:23:05.416 }, 00:23:05.416 { 00:23:05.416 "name": "BaseBdev3", 00:23:05.416 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:05.416 "is_configured": true, 00:23:05.416 "data_offset": 0, 00:23:05.416 "data_size": 65536 00:23:05.416 }, 00:23:05.416 { 00:23:05.416 "name": "BaseBdev4", 00:23:05.416 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:05.416 "is_configured": true, 00:23:05.416 "data_offset": 0, 00:23:05.416 "data_size": 65536 00:23:05.416 } 00:23:05.416 ] 00:23:05.416 }' 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.416 23:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:06.351 [2024-07-13 23:08:55.637485] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.351 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.609 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.609 "name": "Existed_Raid", 00:23:06.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.609 "strip_size_kb": 64, 00:23:06.609 "state": "configuring", 00:23:06.609 "raid_level": "concat", 00:23:06.609 "superblock": false, 00:23:06.609 "num_base_bdevs": 4, 00:23:06.609 "num_base_bdevs_discovered": 2, 00:23:06.609 "num_base_bdevs_operational": 4, 00:23:06.609 "base_bdevs_list": [ 00:23:06.609 { 00:23:06.609 "name": "BaseBdev1", 00:23:06.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.609 "is_configured": false, 00:23:06.609 "data_offset": 0, 00:23:06.609 "data_size": 0 00:23:06.609 }, 00:23:06.609 { 00:23:06.609 "name": null, 
00:23:06.609 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:06.609 "is_configured": false, 00:23:06.609 "data_offset": 0, 00:23:06.609 "data_size": 65536 00:23:06.609 }, 00:23:06.609 { 00:23:06.609 "name": "BaseBdev3", 00:23:06.609 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:06.609 "is_configured": true, 00:23:06.609 "data_offset": 0, 00:23:06.609 "data_size": 65536 00:23:06.609 }, 00:23:06.609 { 00:23:06.609 "name": "BaseBdev4", 00:23:06.609 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:06.609 "is_configured": true, 00:23:06.609 "data_offset": 0, 00:23:06.609 "data_size": 65536 00:23:06.609 } 00:23:06.609 ] 00:23:06.609 }' 00:23:06.609 23:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.609 23:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.176 23:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.176 23:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:07.435 23:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:07.435 23:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:07.694 [2024-07-13 23:08:56.971369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.694 BaseBdev1 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:07.694 23:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:07.952 23:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:08.211 [ 00:23:08.211 { 00:23:08.211 "name": "BaseBdev1", 00:23:08.211 "aliases": [ 00:23:08.211 "896ab1f4-42ae-4b82-a9c2-c40a71880ae5" 00:23:08.211 ], 00:23:08.211 "product_name": "Malloc disk", 00:23:08.211 "block_size": 512, 00:23:08.211 "num_blocks": 65536, 00:23:08.211 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:08.211 "assigned_rate_limits": { 00:23:08.211 "rw_ios_per_sec": 0, 00:23:08.211 "rw_mbytes_per_sec": 0, 00:23:08.211 "r_mbytes_per_sec": 0, 00:23:08.211 "w_mbytes_per_sec": 0 00:23:08.211 }, 00:23:08.211 "claimed": true, 00:23:08.211 "claim_type": "exclusive_write", 00:23:08.211 "zoned": false, 00:23:08.211 "supported_io_types": { 00:23:08.211 "read": true, 00:23:08.211 "write": true, 00:23:08.211 "unmap": true, 00:23:08.211 "flush": true, 00:23:08.211 "reset": true, 00:23:08.211 "nvme_admin": false, 00:23:08.211 "nvme_io": 
false, 00:23:08.211 "nvme_io_md": false, 00:23:08.211 "write_zeroes": true, 00:23:08.211 "zcopy": true, 00:23:08.211 "get_zone_info": false, 00:23:08.211 "zone_management": false, 00:23:08.211 "zone_append": false, 00:23:08.211 "compare": false, 00:23:08.211 "compare_and_write": false, 00:23:08.211 "abort": true, 00:23:08.211 "seek_hole": false, 00:23:08.211 "seek_data": false, 00:23:08.211 "copy": true, 00:23:08.211 "nvme_iov_md": false 00:23:08.211 }, 00:23:08.211 "memory_domains": [ 00:23:08.211 { 00:23:08.211 "dma_device_id": "system", 00:23:08.211 "dma_device_type": 1 00:23:08.211 }, 00:23:08.211 { 00:23:08.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.211 "dma_device_type": 2 00:23:08.211 } 00:23:08.211 ], 00:23:08.211 "driver_specific": {} 00:23:08.211 } 00:23:08.211 ] 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.211 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.470 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.470 "name": "Existed_Raid", 00:23:08.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.470 "strip_size_kb": 64, 00:23:08.470 "state": "configuring", 00:23:08.470 "raid_level": "concat", 00:23:08.470 "superblock": false, 00:23:08.470 "num_base_bdevs": 4, 00:23:08.470 "num_base_bdevs_discovered": 3, 00:23:08.470 "num_base_bdevs_operational": 4, 00:23:08.470 "base_bdevs_list": [ 00:23:08.470 { 00:23:08.470 "name": "BaseBdev1", 00:23:08.470 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:08.470 "is_configured": true, 00:23:08.470 "data_offset": 0, 00:23:08.470 "data_size": 65536 00:23:08.470 }, 00:23:08.470 { 00:23:08.470 "name": null, 00:23:08.470 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:08.470 "is_configured": false, 00:23:08.470 "data_offset": 0, 00:23:08.470 "data_size": 65536 00:23:08.470 }, 00:23:08.470 { 00:23:08.470 "name": "BaseBdev3", 00:23:08.470 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:08.470 "is_configured": true, 00:23:08.470 "data_offset": 0, 00:23:08.470 "data_size": 65536 00:23:08.470 }, 
00:23:08.470 { 00:23:08.470 "name": "BaseBdev4", 00:23:08.470 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:08.470 "is_configured": true, 00:23:08.470 "data_offset": 0, 00:23:08.470 "data_size": 65536 00:23:08.470 } 00:23:08.470 ] 00:23:08.470 }' 00:23:08.470 23:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.470 23:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.036 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.036 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:09.294 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:09.294 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:09.552 [2024-07-13 23:08:58.887999] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.552 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.553 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.553 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.553 23:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.811 23:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.811 "name": "Existed_Raid", 00:23:09.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.811 "strip_size_kb": 64, 00:23:09.811 "state": "configuring", 00:23:09.811 "raid_level": "concat", 00:23:09.811 "superblock": false, 00:23:09.811 "num_base_bdevs": 4, 00:23:09.811 "num_base_bdevs_discovered": 2, 00:23:09.811 "num_base_bdevs_operational": 4, 00:23:09.811 "base_bdevs_list": [ 00:23:09.811 { 00:23:09.811 "name": "BaseBdev1", 00:23:09.811 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:09.811 "is_configured": true, 00:23:09.811 "data_offset": 0, 00:23:09.811 "data_size": 65536 00:23:09.811 }, 00:23:09.811 { 00:23:09.811 "name": null, 00:23:09.811 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:09.811 "is_configured": false, 00:23:09.811 "data_offset": 
0, 00:23:09.811 "data_size": 65536 00:23:09.811 }, 00:23:09.811 { 00:23:09.811 "name": null, 00:23:09.811 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:09.811 "is_configured": false, 00:23:09.811 "data_offset": 0, 00:23:09.811 "data_size": 65536 00:23:09.811 }, 00:23:09.811 { 00:23:09.811 "name": "BaseBdev4", 00:23:09.811 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:09.811 "is_configured": true, 00:23:09.811 "data_offset": 0, 00:23:09.811 "data_size": 65536 00:23:09.811 } 00:23:09.811 ] 00:23:09.811 }' 00:23:09.811 23:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.811 23:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.746 23:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.746 23:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:10.746 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:10.746 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:11.004 [2024-07-13 23:09:00.356349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.004 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.262 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.262 "name": "Existed_Raid", 00:23:11.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.262 "strip_size_kb": 64, 00:23:11.262 "state": "configuring", 00:23:11.262 "raid_level": "concat", 00:23:11.262 "superblock": false, 00:23:11.262 "num_base_bdevs": 4, 00:23:11.262 "num_base_bdevs_discovered": 3, 00:23:11.262 "num_base_bdevs_operational": 4, 00:23:11.262 "base_bdevs_list": [ 00:23:11.262 { 00:23:11.262 "name": "BaseBdev1", 00:23:11.262 "uuid": 
"896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:11.262 "is_configured": true, 00:23:11.262 "data_offset": 0, 00:23:11.262 "data_size": 65536 00:23:11.262 }, 00:23:11.262 { 00:23:11.262 "name": null, 00:23:11.262 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:11.262 "is_configured": false, 00:23:11.262 "data_offset": 0, 00:23:11.262 "data_size": 65536 00:23:11.262 }, 00:23:11.262 { 00:23:11.262 "name": "BaseBdev3", 00:23:11.262 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:11.262 "is_configured": true, 00:23:11.262 "data_offset": 0, 00:23:11.262 "data_size": 65536 00:23:11.262 }, 00:23:11.262 { 00:23:11.262 "name": "BaseBdev4", 00:23:11.262 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:11.262 "is_configured": true, 00:23:11.262 "data_offset": 0, 00:23:11.262 "data_size": 65536 00:23:11.262 } 00:23:11.262 ] 00:23:11.262 }' 00:23:11.262 23:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.262 23:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.196 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.196 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:12.196 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:12.196 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:12.454 [2024-07-13 23:09:01.796715] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.454 23:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.714 23:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.714 "name": "Existed_Raid", 00:23:12.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.714 "strip_size_kb": 64, 00:23:12.714 "state": "configuring", 00:23:12.714 "raid_level": 
"concat", 00:23:12.714 "superblock": false, 00:23:12.714 "num_base_bdevs": 4, 00:23:12.714 "num_base_bdevs_discovered": 2, 00:23:12.714 "num_base_bdevs_operational": 4, 00:23:12.714 "base_bdevs_list": [ 00:23:12.714 { 00:23:12.714 "name": null, 00:23:12.714 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:12.714 "is_configured": false, 00:23:12.714 "data_offset": 0, 00:23:12.714 "data_size": 65536 00:23:12.714 }, 00:23:12.714 { 00:23:12.714 "name": null, 00:23:12.714 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:12.714 "is_configured": false, 00:23:12.714 "data_offset": 0, 00:23:12.714 "data_size": 65536 00:23:12.714 }, 00:23:12.714 { 00:23:12.714 "name": "BaseBdev3", 00:23:12.714 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:12.714 "is_configured": true, 00:23:12.714 "data_offset": 0, 00:23:12.714 "data_size": 65536 00:23:12.714 }, 00:23:12.714 { 00:23:12.714 "name": "BaseBdev4", 00:23:12.714 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:12.714 "is_configured": true, 00:23:12.714 "data_offset": 0, 00:23:12.714 "data_size": 65536 00:23:12.714 } 00:23:12.714 ] 00:23:12.714 }' 00:23:12.714 23:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.714 23:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.647 23:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.647 23:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:13.647 23:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:13.647 23:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:13.905 [2024-07-13 23:09:03.211174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.905 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:23:14.163 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.163 "name": "Existed_Raid", 00:23:14.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.163 "strip_size_kb": 64, 00:23:14.163 "state": "configuring", 00:23:14.163 "raid_level": "concat", 00:23:14.163 "superblock": false, 00:23:14.163 "num_base_bdevs": 4, 00:23:14.163 "num_base_bdevs_discovered": 3, 00:23:14.163 "num_base_bdevs_operational": 4, 00:23:14.163 "base_bdevs_list": [ 00:23:14.163 { 00:23:14.163 "name": null, 00:23:14.163 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:14.163 "is_configured": false, 00:23:14.163 "data_offset": 0, 00:23:14.163 "data_size": 65536 00:23:14.163 }, 00:23:14.163 { 00:23:14.163 "name": "BaseBdev2", 00:23:14.163 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:14.163 "is_configured": true, 00:23:14.163 "data_offset": 0, 00:23:14.163 "data_size": 65536 00:23:14.163 }, 00:23:14.163 { 00:23:14.163 "name": "BaseBdev3", 00:23:14.163 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:14.163 "is_configured": true, 00:23:14.163 "data_offset": 0, 00:23:14.163 "data_size": 65536 00:23:14.163 }, 00:23:14.163 { 00:23:14.163 "name": "BaseBdev4", 00:23:14.163 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:14.163 "is_configured": true, 00:23:14.163 "data_offset": 0, 00:23:14.163 "data_size": 65536 00:23:14.163 } 00:23:14.163 ] 00:23:14.163 }' 00:23:14.163 23:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.163 23:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.095 23:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.095 23:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:15.095 23:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:15.095 23:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.095 23:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:15.353 23:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 896ab1f4-42ae-4b82-a9c2-c40a71880ae5 00:23:15.610 [2024-07-13 23:09:04.952914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:15.610 [2024-07-13 23:09:04.953314] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:23:15.610 [2024-07-13 23:09:04.953366] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:15.610 [2024-07-13 23:09:04.953573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:15.610 [2024-07-13 23:09:04.954066] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:23:15.610 [2024-07-13 23:09:04.954198] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:23:15.610 [2024-07-13 23:09:04.954541] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.610 NewBaseBdev 00:23:15.610 23:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:15.610 23:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:15.610 23:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:15.610 23:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:15.610 23:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:15.610 23:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:15.610 23:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.868 23:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:16.126 [ 00:23:16.126 { 00:23:16.126 "name": "NewBaseBdev", 00:23:16.126 "aliases": [ 00:23:16.126 "896ab1f4-42ae-4b82-a9c2-c40a71880ae5" 00:23:16.126 ], 00:23:16.126 "product_name": "Malloc disk", 00:23:16.126 "block_size": 512, 00:23:16.127 "num_blocks": 65536, 00:23:16.127 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:16.127 "assigned_rate_limits": { 00:23:16.127 "rw_ios_per_sec": 0, 00:23:16.127 "rw_mbytes_per_sec": 0, 00:23:16.127 "r_mbytes_per_sec": 0, 00:23:16.127 "w_mbytes_per_sec": 0 00:23:16.127 }, 00:23:16.127 "claimed": true, 00:23:16.127 "claim_type": "exclusive_write", 00:23:16.127 "zoned": false, 00:23:16.127 "supported_io_types": { 00:23:16.127 "read": true, 00:23:16.127 "write": true, 00:23:16.127 "unmap": true, 00:23:16.127 "flush": true, 00:23:16.127 "reset": true, 00:23:16.127 "nvme_admin": false, 00:23:16.127 "nvme_io": false, 00:23:16.127 "nvme_io_md": false, 00:23:16.127 "write_zeroes": true, 00:23:16.127 "zcopy": true, 00:23:16.127 "get_zone_info": false, 00:23:16.127 "zone_management": false, 00:23:16.127 "zone_append": false, 00:23:16.127 "compare": false, 00:23:16.127 "compare_and_write": false, 00:23:16.127 "abort": true, 00:23:16.127 "seek_hole": false, 00:23:16.127 "seek_data": false, 00:23:16.127 "copy": true, 00:23:16.127 "nvme_iov_md": false 00:23:16.127 }, 00:23:16.127 "memory_domains": [ 00:23:16.127 { 00:23:16.127 "dma_device_id": "system", 00:23:16.127 "dma_device_type": 1 00:23:16.127 }, 00:23:16.127 { 00:23:16.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.127 "dma_device_type": 2 00:23:16.127 } 00:23:16.127 ], 00:23:16.127 "driver_specific": {} 00:23:16.127 } 00:23:16.127 ] 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:16.127 23:09:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.127 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.385 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.385 "name": "Existed_Raid", 00:23:16.385 "uuid": "873a49af-1687-4f0e-b0cb-ca6c1372c693", 00:23:16.385 "strip_size_kb": 64, 00:23:16.385 "state": "online", 00:23:16.385 "raid_level": "concat", 00:23:16.385 "superblock": false, 00:23:16.385 "num_base_bdevs": 4, 00:23:16.385 "num_base_bdevs_discovered": 4, 00:23:16.385 "num_base_bdevs_operational": 4, 00:23:16.385 "base_bdevs_list": [ 00:23:16.385 { 00:23:16.385 "name": "NewBaseBdev", 00:23:16.385 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:16.385 "is_configured": true, 00:23:16.385 "data_offset": 0, 00:23:16.385 "data_size": 65536 00:23:16.385 }, 00:23:16.385 { 00:23:16.385 "name": "BaseBdev2", 00:23:16.385 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:16.385 "is_configured": true, 00:23:16.385 "data_offset": 0, 00:23:16.385 "data_size": 65536 00:23:16.385 }, 00:23:16.385 { 00:23:16.385 "name": "BaseBdev3", 00:23:16.385 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:16.385 "is_configured": true, 00:23:16.385 "data_offset": 0, 00:23:16.385 "data_size": 65536 00:23:16.385 }, 00:23:16.385 { 00:23:16.385 "name": "BaseBdev4", 00:23:16.385 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:16.385 "is_configured": true, 00:23:16.385 "data_offset": 0, 00:23:16.385 "data_size": 65536 00:23:16.385 } 00:23:16.385 ] 00:23:16.385 }' 00:23:16.385 23:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.385 23:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:17.321 [2024-07-13 23:09:06.613725] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:17.321 
"name": "Existed_Raid", 00:23:17.321 "aliases": [ 00:23:17.321 "873a49af-1687-4f0e-b0cb-ca6c1372c693" 00:23:17.321 ], 00:23:17.321 "product_name": "Raid Volume", 00:23:17.321 "block_size": 512, 00:23:17.321 "num_blocks": 262144, 00:23:17.321 "uuid": "873a49af-1687-4f0e-b0cb-ca6c1372c693", 00:23:17.321 "assigned_rate_limits": { 00:23:17.321 "rw_ios_per_sec": 0, 00:23:17.321 "rw_mbytes_per_sec": 0, 00:23:17.321 "r_mbytes_per_sec": 0, 00:23:17.321 "w_mbytes_per_sec": 0 00:23:17.321 }, 00:23:17.321 "claimed": false, 00:23:17.321 "zoned": false, 00:23:17.321 "supported_io_types": { 00:23:17.321 "read": true, 00:23:17.321 "write": true, 00:23:17.321 "unmap": true, 00:23:17.321 "flush": true, 00:23:17.321 "reset": true, 00:23:17.321 "nvme_admin": false, 00:23:17.321 "nvme_io": false, 00:23:17.321 "nvme_io_md": false, 00:23:17.321 "write_zeroes": true, 00:23:17.321 "zcopy": false, 00:23:17.321 "get_zone_info": false, 00:23:17.321 "zone_management": false, 00:23:17.321 "zone_append": false, 00:23:17.321 "compare": false, 00:23:17.321 "compare_and_write": false, 00:23:17.321 "abort": false, 00:23:17.321 "seek_hole": false, 00:23:17.321 "seek_data": false, 00:23:17.321 "copy": false, 00:23:17.321 "nvme_iov_md": false 00:23:17.321 }, 00:23:17.321 "memory_domains": [ 00:23:17.321 { 00:23:17.321 "dma_device_id": "system", 00:23:17.321 "dma_device_type": 1 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.321 "dma_device_type": 2 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "system", 00:23:17.321 "dma_device_type": 1 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.321 "dma_device_type": 2 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "system", 00:23:17.321 "dma_device_type": 1 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.321 "dma_device_type": 2 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "system", 00:23:17.321 "dma_device_type": 1 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.321 "dma_device_type": 2 00:23:17.321 } 00:23:17.321 ], 00:23:17.321 "driver_specific": { 00:23:17.321 "raid": { 00:23:17.321 "uuid": "873a49af-1687-4f0e-b0cb-ca6c1372c693", 00:23:17.321 "strip_size_kb": 64, 00:23:17.321 "state": "online", 00:23:17.321 "raid_level": "concat", 00:23:17.321 "superblock": false, 00:23:17.321 "num_base_bdevs": 4, 00:23:17.321 "num_base_bdevs_discovered": 4, 00:23:17.321 "num_base_bdevs_operational": 4, 00:23:17.321 "base_bdevs_list": [ 00:23:17.321 { 00:23:17.321 "name": "NewBaseBdev", 00:23:17.321 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:17.321 "is_configured": true, 00:23:17.321 "data_offset": 0, 00:23:17.321 "data_size": 65536 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "name": "BaseBdev2", 00:23:17.321 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:17.321 "is_configured": true, 00:23:17.321 "data_offset": 0, 00:23:17.321 "data_size": 65536 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "name": "BaseBdev3", 00:23:17.321 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:17.321 "is_configured": true, 00:23:17.321 "data_offset": 0, 00:23:17.321 "data_size": 65536 00:23:17.321 }, 00:23:17.321 { 00:23:17.321 "name": "BaseBdev4", 00:23:17.321 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:17.321 "is_configured": true, 00:23:17.321 "data_offset": 0, 00:23:17.321 "data_size": 65536 00:23:17.321 } 00:23:17.321 ] 00:23:17.321 } 00:23:17.321 } 
00:23:17.321 }' 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:17.321 BaseBdev2 00:23:17.321 BaseBdev3 00:23:17.321 BaseBdev4' 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:17.321 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.579 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.579 "name": "NewBaseBdev", 00:23:17.579 "aliases": [ 00:23:17.579 "896ab1f4-42ae-4b82-a9c2-c40a71880ae5" 00:23:17.579 ], 00:23:17.579 "product_name": "Malloc disk", 00:23:17.579 "block_size": 512, 00:23:17.579 "num_blocks": 65536, 00:23:17.579 "uuid": "896ab1f4-42ae-4b82-a9c2-c40a71880ae5", 00:23:17.579 "assigned_rate_limits": { 00:23:17.579 "rw_ios_per_sec": 0, 00:23:17.579 "rw_mbytes_per_sec": 0, 00:23:17.579 "r_mbytes_per_sec": 0, 00:23:17.579 "w_mbytes_per_sec": 0 00:23:17.579 }, 00:23:17.579 "claimed": true, 00:23:17.579 "claim_type": "exclusive_write", 00:23:17.579 "zoned": false, 00:23:17.579 "supported_io_types": { 00:23:17.579 "read": true, 00:23:17.579 "write": true, 00:23:17.579 "unmap": true, 00:23:17.579 "flush": true, 00:23:17.579 "reset": true, 00:23:17.579 "nvme_admin": false, 00:23:17.579 "nvme_io": false, 00:23:17.579 "nvme_io_md": false, 00:23:17.579 "write_zeroes": true, 00:23:17.579 "zcopy": true, 00:23:17.579 "get_zone_info": false, 00:23:17.579 "zone_management": false, 00:23:17.579 "zone_append": false, 00:23:17.579 "compare": false, 00:23:17.579 "compare_and_write": false, 00:23:17.579 "abort": true, 00:23:17.579 "seek_hole": false, 00:23:17.579 "seek_data": false, 00:23:17.579 "copy": true, 00:23:17.579 "nvme_iov_md": false 00:23:17.579 }, 00:23:17.579 "memory_domains": [ 00:23:17.579 { 00:23:17.579 "dma_device_id": "system", 00:23:17.579 "dma_device_type": 1 00:23:17.579 }, 00:23:17.579 { 00:23:17.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.580 "dma_device_type": 2 00:23:17.580 } 00:23:17.580 ], 00:23:17.580 "driver_specific": {} 00:23:17.580 }' 00:23:17.580 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.580 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.837 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:17.837 23:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.837 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.837 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.837 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.837 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.837 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.837 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.096 23:09:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.096 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:18.096 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:18.096 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:18.096 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:18.355 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:18.355 "name": "BaseBdev2", 00:23:18.355 "aliases": [ 00:23:18.355 "0fd4b667-dacc-4366-bb0a-e1cb865db377" 00:23:18.355 ], 00:23:18.355 "product_name": "Malloc disk", 00:23:18.355 "block_size": 512, 00:23:18.355 "num_blocks": 65536, 00:23:18.355 "uuid": "0fd4b667-dacc-4366-bb0a-e1cb865db377", 00:23:18.355 "assigned_rate_limits": { 00:23:18.355 "rw_ios_per_sec": 0, 00:23:18.355 "rw_mbytes_per_sec": 0, 00:23:18.355 "r_mbytes_per_sec": 0, 00:23:18.355 "w_mbytes_per_sec": 0 00:23:18.355 }, 00:23:18.355 "claimed": true, 00:23:18.355 "claim_type": "exclusive_write", 00:23:18.355 "zoned": false, 00:23:18.355 "supported_io_types": { 00:23:18.355 "read": true, 00:23:18.355 "write": true, 00:23:18.355 "unmap": true, 00:23:18.355 "flush": true, 00:23:18.355 "reset": true, 00:23:18.355 "nvme_admin": false, 00:23:18.355 "nvme_io": false, 00:23:18.355 "nvme_io_md": false, 00:23:18.355 "write_zeroes": true, 00:23:18.355 "zcopy": true, 00:23:18.355 "get_zone_info": false, 00:23:18.355 "zone_management": false, 00:23:18.355 "zone_append": false, 00:23:18.355 "compare": false, 00:23:18.355 "compare_and_write": false, 00:23:18.355 "abort": true, 00:23:18.355 "seek_hole": false, 00:23:18.355 "seek_data": false, 00:23:18.355 "copy": true, 00:23:18.355 "nvme_iov_md": false 00:23:18.355 }, 00:23:18.355 "memory_domains": [ 00:23:18.355 { 00:23:18.355 "dma_device_id": "system", 00:23:18.355 "dma_device_type": 1 00:23:18.355 }, 00:23:18.355 { 00:23:18.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.355 "dma_device_type": 2 00:23:18.355 } 00:23:18.355 ], 00:23:18.355 "driver_specific": {} 00:23:18.355 }' 00:23:18.355 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.355 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.355 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:18.355 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.355 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:18.614 
23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:18.614 23:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:18.873 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:18.873 "name": "BaseBdev3", 00:23:18.873 "aliases": [ 00:23:18.873 "faebe469-87a6-4d63-9958-f2c8113e737c" 00:23:18.873 ], 00:23:18.873 "product_name": "Malloc disk", 00:23:18.873 "block_size": 512, 00:23:18.873 "num_blocks": 65536, 00:23:18.873 "uuid": "faebe469-87a6-4d63-9958-f2c8113e737c", 00:23:18.873 "assigned_rate_limits": { 00:23:18.873 "rw_ios_per_sec": 0, 00:23:18.873 "rw_mbytes_per_sec": 0, 00:23:18.873 "r_mbytes_per_sec": 0, 00:23:18.873 "w_mbytes_per_sec": 0 00:23:18.873 }, 00:23:18.873 "claimed": true, 00:23:18.873 "claim_type": "exclusive_write", 00:23:18.873 "zoned": false, 00:23:18.873 "supported_io_types": { 00:23:18.873 "read": true, 00:23:18.873 "write": true, 00:23:18.873 "unmap": true, 00:23:18.873 "flush": true, 00:23:18.873 "reset": true, 00:23:18.873 "nvme_admin": false, 00:23:18.873 "nvme_io": false, 00:23:18.873 "nvme_io_md": false, 00:23:18.873 "write_zeroes": true, 00:23:18.873 "zcopy": true, 00:23:18.873 "get_zone_info": false, 00:23:18.873 "zone_management": false, 00:23:18.873 "zone_append": false, 00:23:18.873 "compare": false, 00:23:18.873 "compare_and_write": false, 00:23:18.873 "abort": true, 00:23:18.873 "seek_hole": false, 00:23:18.873 "seek_data": false, 00:23:18.873 "copy": true, 00:23:18.873 "nvme_iov_md": false 00:23:18.873 }, 00:23:18.873 "memory_domains": [ 00:23:18.873 { 00:23:18.873 "dma_device_id": "system", 00:23:18.873 "dma_device_type": 1 00:23:18.873 }, 00:23:18.873 { 00:23:18.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.873 "dma_device_type": 2 00:23:18.873 } 00:23:18.873 ], 00:23:18.873 "driver_specific": {} 00:23:18.873 }' 00:23:18.873 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.131 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:19.391 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:19.649 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:19.649 "name": "BaseBdev4", 00:23:19.649 "aliases": [ 00:23:19.649 "6f2e563c-55a0-4452-b46b-2eac0893fe97" 00:23:19.649 ], 00:23:19.649 "product_name": "Malloc disk", 00:23:19.649 "block_size": 512, 00:23:19.649 "num_blocks": 65536, 00:23:19.649 "uuid": "6f2e563c-55a0-4452-b46b-2eac0893fe97", 00:23:19.649 "assigned_rate_limits": { 00:23:19.649 "rw_ios_per_sec": 0, 00:23:19.649 "rw_mbytes_per_sec": 0, 00:23:19.649 "r_mbytes_per_sec": 0, 00:23:19.649 "w_mbytes_per_sec": 0 00:23:19.649 }, 00:23:19.649 "claimed": true, 00:23:19.649 "claim_type": "exclusive_write", 00:23:19.649 "zoned": false, 00:23:19.649 "supported_io_types": { 00:23:19.649 "read": true, 00:23:19.649 "write": true, 00:23:19.649 "unmap": true, 00:23:19.649 "flush": true, 00:23:19.649 "reset": true, 00:23:19.649 "nvme_admin": false, 00:23:19.649 "nvme_io": false, 00:23:19.649 "nvme_io_md": false, 00:23:19.649 "write_zeroes": true, 00:23:19.649 "zcopy": true, 00:23:19.649 "get_zone_info": false, 00:23:19.649 "zone_management": false, 00:23:19.649 "zone_append": false, 00:23:19.649 "compare": false, 00:23:19.649 "compare_and_write": false, 00:23:19.649 "abort": true, 00:23:19.649 "seek_hole": false, 00:23:19.649 "seek_data": false, 00:23:19.649 "copy": true, 00:23:19.649 "nvme_iov_md": false 00:23:19.649 }, 00:23:19.649 "memory_domains": [ 00:23:19.649 { 00:23:19.649 "dma_device_id": "system", 00:23:19.649 "dma_device_type": 1 00:23:19.649 }, 00:23:19.649 { 00:23:19.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.649 "dma_device_type": 2 00:23:19.649 } 00:23:19.649 ], 00:23:19.649 "driver_specific": {} 00:23:19.649 }' 00:23:19.649 23:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.649 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.907 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.166 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:20.166 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:20.424 [2024-07-13 23:09:09.582147] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:20.424 [2024-07-13 23:09:09.582404] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:23:20.424 [2024-07-13 23:09:09.582619] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.424 [2024-07-13 23:09:09.582855] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.424 [2024-07-13 23:09:09.582976] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 147205 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 147205 ']' 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 147205 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147205 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147205' 00:23:20.424 killing process with pid 147205 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 147205 00:23:20.424 [2024-07-13 23:09:09.629456] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.424 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 147205 00:23:20.424 [2024-07-13 23:09:09.668364] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:20.683 00:23:20.683 real 0m34.127s 00:23:20.683 user 1m4.927s 00:23:20.683 sys 0m4.199s 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.683 ************************************ 00:23:20.683 END TEST raid_state_function_test 00:23:20.683 ************************************ 00:23:20.683 23:09:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:20.683 23:09:09 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:23:20.683 23:09:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:20.683 23:09:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.683 23:09:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.683 ************************************ 00:23:20.683 START TEST raid_state_function_test_sb 00:23:20.683 ************************************ 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:20.683 23:09:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:20.683 Process raid pid: 148315 00:23:20.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=148315 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 148315' 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 148315 /var/tmp/spdk-raid.sock 00:23:20.683 23:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:20.684 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 148315 ']' 00:23:20.684 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:20.684 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.684 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:20.684 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.684 23:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.684 [2024-07-13 23:09:10.041504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:20.684 [2024-07-13 23:09:10.041729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.942 [2024-07-13 23:09:10.186241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.942 [2024-07-13 23:09:10.321195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.201 [2024-07-13 23:09:10.402925] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:21.774 23:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.774 23:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:23:21.774 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:22.032 [2024-07-13 23:09:11.287409] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:22.032 [2024-07-13 23:09:11.287560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:22.032 [2024-07-13 23:09:11.287592] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.032 [2024-07-13 23:09:11.287612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.032 [2024-07-13 23:09:11.287620] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:22.032 [2024-07-13 23:09:11.287668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:22.032 [2024-07-13 23:09:11.287677] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:22.032 [2024-07-13 23:09:11.287702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.032 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.032 23:09:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.290 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.290 "name": "Existed_Raid", 00:23:22.290 "uuid": "6f2ffda9-a830-47aa-9cf1-4b6a59887f01", 00:23:22.290 "strip_size_kb": 64, 00:23:22.290 "state": "configuring", 00:23:22.290 "raid_level": "concat", 00:23:22.290 "superblock": true, 00:23:22.290 "num_base_bdevs": 4, 00:23:22.290 "num_base_bdevs_discovered": 0, 00:23:22.290 "num_base_bdevs_operational": 4, 00:23:22.290 "base_bdevs_list": [ 00:23:22.290 { 00:23:22.290 "name": "BaseBdev1", 00:23:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.290 "is_configured": false, 00:23:22.290 "data_offset": 0, 00:23:22.290 "data_size": 0 00:23:22.290 }, 00:23:22.290 { 00:23:22.290 "name": "BaseBdev2", 00:23:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.290 "is_configured": false, 00:23:22.290 "data_offset": 0, 00:23:22.290 "data_size": 0 00:23:22.290 }, 00:23:22.290 { 00:23:22.290 "name": "BaseBdev3", 00:23:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.290 "is_configured": false, 00:23:22.290 "data_offset": 0, 00:23:22.290 "data_size": 0 00:23:22.290 }, 00:23:22.290 { 00:23:22.290 "name": "BaseBdev4", 00:23:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.290 "is_configured": false, 00:23:22.290 "data_offset": 0, 00:23:22.290 "data_size": 0 00:23:22.290 } 00:23:22.290 ] 00:23:22.290 }' 00:23:22.290 23:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.290 23:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.883 23:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:23.168 [2024-07-13 23:09:12.427354] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:23.168 [2024-07-13 23:09:12.427415] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:23:23.168 23:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:23.426 [2024-07-13 23:09:12.651410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:23.426 [2024-07-13 23:09:12.651475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:23.426 [2024-07-13 23:09:12.651487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:23.426 [2024-07-13 23:09:12.651515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:23.426 [2024-07-13 23:09:12.651524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:23.426 [2024-07-13 23:09:12.651559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:23.426 [2024-07-13 23:09:12.651567] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:23.426 [2024-07-13 23:09:12.651596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:23.426 23:09:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:23.683 [2024-07-13 23:09:12.934679] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:23.683 BaseBdev1 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:23.684 23:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:23.941 23:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:24.200 [ 00:23:24.200 { 00:23:24.200 "name": "BaseBdev1", 00:23:24.200 "aliases": [ 00:23:24.200 "97bebbfd-17a2-44d4-a3f4-b743d57f28f6" 00:23:24.200 ], 00:23:24.200 "product_name": "Malloc disk", 00:23:24.200 "block_size": 512, 00:23:24.200 "num_blocks": 65536, 00:23:24.200 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:24.200 "assigned_rate_limits": { 00:23:24.200 "rw_ios_per_sec": 0, 00:23:24.200 "rw_mbytes_per_sec": 0, 00:23:24.200 "r_mbytes_per_sec": 0, 00:23:24.200 "w_mbytes_per_sec": 0 00:23:24.200 }, 00:23:24.200 "claimed": true, 00:23:24.200 "claim_type": "exclusive_write", 00:23:24.200 "zoned": false, 00:23:24.200 "supported_io_types": { 00:23:24.200 "read": true, 00:23:24.200 "write": true, 00:23:24.200 "unmap": true, 00:23:24.200 "flush": true, 00:23:24.200 "reset": true, 00:23:24.200 "nvme_admin": false, 00:23:24.200 "nvme_io": false, 00:23:24.200 "nvme_io_md": false, 00:23:24.200 "write_zeroes": true, 00:23:24.200 "zcopy": true, 00:23:24.200 "get_zone_info": false, 00:23:24.200 "zone_management": false, 00:23:24.200 "zone_append": false, 00:23:24.200 "compare": false, 00:23:24.200 "compare_and_write": false, 00:23:24.200 "abort": true, 00:23:24.200 "seek_hole": false, 00:23:24.200 "seek_data": false, 00:23:24.200 "copy": true, 00:23:24.200 "nvme_iov_md": false 00:23:24.200 }, 00:23:24.200 "memory_domains": [ 00:23:24.200 { 00:23:24.200 "dma_device_id": "system", 00:23:24.200 "dma_device_type": 1 00:23:24.200 }, 00:23:24.200 { 00:23:24.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.200 "dma_device_type": 2 00:23:24.200 } 00:23:24.200 ], 00:23:24.200 "driver_specific": {} 00:23:24.200 } 00:23:24.200 ] 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.201 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.459 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.459 "name": "Existed_Raid", 00:23:24.459 "uuid": "7836c197-3048-4647-9757-ec1d88fd8e24", 00:23:24.459 "strip_size_kb": 64, 00:23:24.459 "state": "configuring", 00:23:24.459 "raid_level": "concat", 00:23:24.459 "superblock": true, 00:23:24.459 "num_base_bdevs": 4, 00:23:24.459 "num_base_bdevs_discovered": 1, 00:23:24.459 "num_base_bdevs_operational": 4, 00:23:24.459 "base_bdevs_list": [ 00:23:24.459 { 00:23:24.459 "name": "BaseBdev1", 00:23:24.459 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:24.459 "is_configured": true, 00:23:24.459 "data_offset": 2048, 00:23:24.459 "data_size": 63488 00:23:24.459 }, 00:23:24.459 { 00:23:24.459 "name": "BaseBdev2", 00:23:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.459 "is_configured": false, 00:23:24.459 "data_offset": 0, 00:23:24.459 "data_size": 0 00:23:24.459 }, 00:23:24.459 { 00:23:24.459 "name": "BaseBdev3", 00:23:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.459 "is_configured": false, 00:23:24.459 "data_offset": 0, 00:23:24.459 "data_size": 0 00:23:24.459 }, 00:23:24.460 { 00:23:24.460 "name": "BaseBdev4", 00:23:24.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.460 "is_configured": false, 00:23:24.460 "data_offset": 0, 00:23:24.460 "data_size": 0 00:23:24.460 } 00:23:24.460 ] 00:23:24.460 }' 00:23:24.460 23:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.460 23:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.027 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:25.286 [2024-07-13 23:09:14.563099] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:25.286 [2024-07-13 23:09:14.563194] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:25.286 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:25.546 [2024-07-13 23:09:14.835156] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.546 [2024-07-13 23:09:14.837307] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:25.546 [2024-07-13 23:09:14.837384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:25.546 [2024-07-13 23:09:14.837397] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:25.546 [2024-07-13 23:09:14.837428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:25.546 [2024-07-13 23:09:14.837437] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:25.546 [2024-07-13 23:09:14.837469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.546 23:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.804 23:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.804 "name": "Existed_Raid", 00:23:25.804 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:25.804 "strip_size_kb": 64, 00:23:25.804 "state": "configuring", 00:23:25.804 "raid_level": "concat", 00:23:25.804 "superblock": true, 00:23:25.804 "num_base_bdevs": 4, 00:23:25.804 "num_base_bdevs_discovered": 1, 00:23:25.804 "num_base_bdevs_operational": 4, 00:23:25.804 "base_bdevs_list": [ 00:23:25.804 { 00:23:25.804 "name": "BaseBdev1", 00:23:25.804 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:25.804 "is_configured": true, 00:23:25.804 "data_offset": 2048, 00:23:25.804 "data_size": 63488 00:23:25.804 }, 00:23:25.804 { 00:23:25.804 "name": "BaseBdev2", 00:23:25.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.804 "is_configured": false, 00:23:25.804 "data_offset": 0, 00:23:25.804 "data_size": 0 00:23:25.805 }, 
00:23:25.805 { 00:23:25.805 "name": "BaseBdev3", 00:23:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.805 "is_configured": false, 00:23:25.805 "data_offset": 0, 00:23:25.805 "data_size": 0 00:23:25.805 }, 00:23:25.805 { 00:23:25.805 "name": "BaseBdev4", 00:23:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.805 "is_configured": false, 00:23:25.805 "data_offset": 0, 00:23:25.805 "data_size": 0 00:23:25.805 } 00:23:25.805 ] 00:23:25.805 }' 00:23:25.805 23:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.805 23:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.371 23:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:26.628 [2024-07-13 23:09:16.017584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.628 BaseBdev2 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:26.885 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:27.143 [ 00:23:27.143 { 00:23:27.143 "name": "BaseBdev2", 00:23:27.143 "aliases": [ 00:23:27.143 "57707563-32c5-4dca-908b-83f9ff3151d8" 00:23:27.143 ], 00:23:27.143 "product_name": "Malloc disk", 00:23:27.143 "block_size": 512, 00:23:27.143 "num_blocks": 65536, 00:23:27.143 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:27.143 "assigned_rate_limits": { 00:23:27.143 "rw_ios_per_sec": 0, 00:23:27.143 "rw_mbytes_per_sec": 0, 00:23:27.143 "r_mbytes_per_sec": 0, 00:23:27.143 "w_mbytes_per_sec": 0 00:23:27.143 }, 00:23:27.143 "claimed": true, 00:23:27.143 "claim_type": "exclusive_write", 00:23:27.143 "zoned": false, 00:23:27.143 "supported_io_types": { 00:23:27.143 "read": true, 00:23:27.143 "write": true, 00:23:27.143 "unmap": true, 00:23:27.143 "flush": true, 00:23:27.143 "reset": true, 00:23:27.143 "nvme_admin": false, 00:23:27.143 "nvme_io": false, 00:23:27.143 "nvme_io_md": false, 00:23:27.143 "write_zeroes": true, 00:23:27.143 "zcopy": true, 00:23:27.143 "get_zone_info": false, 00:23:27.143 "zone_management": false, 00:23:27.143 "zone_append": false, 00:23:27.143 "compare": false, 00:23:27.143 "compare_and_write": false, 00:23:27.143 "abort": true, 00:23:27.143 "seek_hole": false, 00:23:27.143 "seek_data": false, 00:23:27.143 "copy": true, 00:23:27.143 "nvme_iov_md": false 00:23:27.143 }, 00:23:27.143 "memory_domains": [ 00:23:27.143 { 00:23:27.143 "dma_device_id": "system", 00:23:27.143 
"dma_device_type": 1 00:23:27.143 }, 00:23:27.143 { 00:23:27.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.143 "dma_device_type": 2 00:23:27.143 } 00:23:27.143 ], 00:23:27.143 "driver_specific": {} 00:23:27.143 } 00:23:27.143 ] 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.143 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.400 23:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.400 "name": "Existed_Raid", 00:23:27.400 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:27.400 "strip_size_kb": 64, 00:23:27.400 "state": "configuring", 00:23:27.400 "raid_level": "concat", 00:23:27.400 "superblock": true, 00:23:27.400 "num_base_bdevs": 4, 00:23:27.400 "num_base_bdevs_discovered": 2, 00:23:27.400 "num_base_bdevs_operational": 4, 00:23:27.400 "base_bdevs_list": [ 00:23:27.400 { 00:23:27.400 "name": "BaseBdev1", 00:23:27.400 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:27.400 "is_configured": true, 00:23:27.400 "data_offset": 2048, 00:23:27.400 "data_size": 63488 00:23:27.400 }, 00:23:27.400 { 00:23:27.400 "name": "BaseBdev2", 00:23:27.400 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:27.400 "is_configured": true, 00:23:27.400 "data_offset": 2048, 00:23:27.400 "data_size": 63488 00:23:27.400 }, 00:23:27.400 { 00:23:27.400 "name": "BaseBdev3", 00:23:27.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.400 "is_configured": false, 00:23:27.400 "data_offset": 0, 00:23:27.400 "data_size": 0 00:23:27.400 }, 00:23:27.400 { 00:23:27.400 "name": "BaseBdev4", 00:23:27.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.400 "is_configured": false, 00:23:27.400 "data_offset": 0, 00:23:27.400 "data_size": 0 00:23:27.400 } 00:23:27.400 ] 00:23:27.400 }' 00:23:27.400 23:09:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.400 23:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:28.333 [2024-07-13 23:09:17.665752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.333 BaseBdev3 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:28.333 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:28.590 23:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:28.847 [ 00:23:28.847 { 00:23:28.847 "name": "BaseBdev3", 00:23:28.847 "aliases": [ 00:23:28.847 "a348a0b0-72d4-4575-9090-2daba4664ff3" 00:23:28.847 ], 00:23:28.847 "product_name": "Malloc disk", 00:23:28.847 "block_size": 512, 00:23:28.847 "num_blocks": 65536, 00:23:28.847 "uuid": "a348a0b0-72d4-4575-9090-2daba4664ff3", 00:23:28.847 "assigned_rate_limits": { 00:23:28.847 "rw_ios_per_sec": 0, 00:23:28.847 "rw_mbytes_per_sec": 0, 00:23:28.847 "r_mbytes_per_sec": 0, 00:23:28.847 "w_mbytes_per_sec": 0 00:23:28.847 }, 00:23:28.847 "claimed": true, 00:23:28.847 "claim_type": "exclusive_write", 00:23:28.847 "zoned": false, 00:23:28.847 "supported_io_types": { 00:23:28.847 "read": true, 00:23:28.847 "write": true, 00:23:28.847 "unmap": true, 00:23:28.847 "flush": true, 00:23:28.847 "reset": true, 00:23:28.847 "nvme_admin": false, 00:23:28.847 "nvme_io": false, 00:23:28.847 "nvme_io_md": false, 00:23:28.847 "write_zeroes": true, 00:23:28.847 "zcopy": true, 00:23:28.847 "get_zone_info": false, 00:23:28.847 "zone_management": false, 00:23:28.847 "zone_append": false, 00:23:28.847 "compare": false, 00:23:28.847 "compare_and_write": false, 00:23:28.847 "abort": true, 00:23:28.847 "seek_hole": false, 00:23:28.847 "seek_data": false, 00:23:28.847 "copy": true, 00:23:28.847 "nvme_iov_md": false 00:23:28.847 }, 00:23:28.847 "memory_domains": [ 00:23:28.847 { 00:23:28.847 "dma_device_id": "system", 00:23:28.847 "dma_device_type": 1 00:23:28.847 }, 00:23:28.847 { 00:23:28.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.847 "dma_device_type": 2 00:23:28.847 } 00:23:28.847 ], 00:23:28.847 "driver_specific": {} 00:23:28.847 } 00:23:28.847 ] 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.847 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.105 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.105 "name": "Existed_Raid", 00:23:29.105 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:29.105 "strip_size_kb": 64, 00:23:29.105 "state": "configuring", 00:23:29.105 "raid_level": "concat", 00:23:29.105 "superblock": true, 00:23:29.105 "num_base_bdevs": 4, 00:23:29.105 "num_base_bdevs_discovered": 3, 00:23:29.105 "num_base_bdevs_operational": 4, 00:23:29.105 "base_bdevs_list": [ 00:23:29.105 { 00:23:29.105 "name": "BaseBdev1", 00:23:29.105 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:29.105 "is_configured": true, 00:23:29.105 "data_offset": 2048, 00:23:29.105 "data_size": 63488 00:23:29.105 }, 00:23:29.105 { 00:23:29.105 "name": "BaseBdev2", 00:23:29.105 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:29.105 "is_configured": true, 00:23:29.105 "data_offset": 2048, 00:23:29.105 "data_size": 63488 00:23:29.105 }, 00:23:29.105 { 00:23:29.105 "name": "BaseBdev3", 00:23:29.105 "uuid": "a348a0b0-72d4-4575-9090-2daba4664ff3", 00:23:29.105 "is_configured": true, 00:23:29.105 "data_offset": 2048, 00:23:29.105 "data_size": 63488 00:23:29.105 }, 00:23:29.105 { 00:23:29.105 "name": "BaseBdev4", 00:23:29.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.105 "is_configured": false, 00:23:29.105 "data_offset": 0, 00:23:29.105 "data_size": 0 00:23:29.105 } 00:23:29.105 ] 00:23:29.105 }' 00:23:29.105 23:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.105 23:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:30.037 [2024-07-13 23:09:19.394994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:30.037 
[2024-07-13 23:09:19.395277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:23:30.037 [2024-07-13 23:09:19.395293] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:30.037 [2024-07-13 23:09:19.395472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:30.037 [2024-07-13 23:09:19.395926] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:23:30.037 [2024-07-13 23:09:19.395952] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:23:30.037 BaseBdev4 00:23:30.037 [2024-07-13 23:09:19.396173] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:30.037 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:30.295 23:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:30.553 [ 00:23:30.553 { 00:23:30.553 "name": "BaseBdev4", 00:23:30.553 "aliases": [ 00:23:30.553 "527d30e8-b773-4e1f-9add-20aeafa03e6f" 00:23:30.553 ], 00:23:30.553 "product_name": "Malloc disk", 00:23:30.553 "block_size": 512, 00:23:30.553 "num_blocks": 65536, 00:23:30.553 "uuid": "527d30e8-b773-4e1f-9add-20aeafa03e6f", 00:23:30.553 "assigned_rate_limits": { 00:23:30.553 "rw_ios_per_sec": 0, 00:23:30.553 "rw_mbytes_per_sec": 0, 00:23:30.553 "r_mbytes_per_sec": 0, 00:23:30.553 "w_mbytes_per_sec": 0 00:23:30.553 }, 00:23:30.553 "claimed": true, 00:23:30.553 "claim_type": "exclusive_write", 00:23:30.553 "zoned": false, 00:23:30.553 "supported_io_types": { 00:23:30.553 "read": true, 00:23:30.553 "write": true, 00:23:30.553 "unmap": true, 00:23:30.553 "flush": true, 00:23:30.553 "reset": true, 00:23:30.553 "nvme_admin": false, 00:23:30.553 "nvme_io": false, 00:23:30.553 "nvme_io_md": false, 00:23:30.553 "write_zeroes": true, 00:23:30.553 "zcopy": true, 00:23:30.553 "get_zone_info": false, 00:23:30.553 "zone_management": false, 00:23:30.553 "zone_append": false, 00:23:30.553 "compare": false, 00:23:30.553 "compare_and_write": false, 00:23:30.553 "abort": true, 00:23:30.553 "seek_hole": false, 00:23:30.553 "seek_data": false, 00:23:30.553 "copy": true, 00:23:30.553 "nvme_iov_md": false 00:23:30.553 }, 00:23:30.553 "memory_domains": [ 00:23:30.553 { 00:23:30.553 "dma_device_id": "system", 00:23:30.553 "dma_device_type": 1 00:23:30.553 }, 00:23:30.553 { 00:23:30.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.553 "dma_device_type": 2 00:23:30.553 } 00:23:30.553 ], 00:23:30.553 "driver_specific": {} 00:23:30.553 } 00:23:30.553 ] 00:23:30.553 23:09:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.553 23:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.811 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.811 "name": "Existed_Raid", 00:23:30.811 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:30.811 "strip_size_kb": 64, 00:23:30.811 "state": "online", 00:23:30.811 "raid_level": "concat", 00:23:30.811 "superblock": true, 00:23:30.811 "num_base_bdevs": 4, 00:23:30.811 "num_base_bdevs_discovered": 4, 00:23:30.811 "num_base_bdevs_operational": 4, 00:23:30.811 "base_bdevs_list": [ 00:23:30.811 { 00:23:30.811 "name": "BaseBdev1", 00:23:30.811 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:30.811 "is_configured": true, 00:23:30.811 "data_offset": 2048, 00:23:30.811 "data_size": 63488 00:23:30.811 }, 00:23:30.811 { 00:23:30.811 "name": "BaseBdev2", 00:23:30.811 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:30.811 "is_configured": true, 00:23:30.811 "data_offset": 2048, 00:23:30.811 "data_size": 63488 00:23:30.811 }, 00:23:30.811 { 00:23:30.811 "name": "BaseBdev3", 00:23:30.811 "uuid": "a348a0b0-72d4-4575-9090-2daba4664ff3", 00:23:30.811 "is_configured": true, 00:23:30.811 "data_offset": 2048, 00:23:30.811 "data_size": 63488 00:23:30.811 }, 00:23:30.811 { 00:23:30.811 "name": "BaseBdev4", 00:23:30.811 "uuid": "527d30e8-b773-4e1f-9add-20aeafa03e6f", 00:23:30.811 "is_configured": true, 00:23:30.811 "data_offset": 2048, 00:23:30.811 "data_size": 63488 00:23:30.811 } 00:23:30.811 ] 00:23:30.811 }' 00:23:30.811 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.811 23:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.379 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:23:31.379 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:31.637 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:31.637 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:31.637 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:31.637 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:31.637 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:31.637 23:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:31.637 [2024-07-13 23:09:21.011864] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:31.637 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:31.637 "name": "Existed_Raid", 00:23:31.637 "aliases": [ 00:23:31.637 "6b816652-f7e0-4e3d-9bea-b9b4f166fdca" 00:23:31.637 ], 00:23:31.637 "product_name": "Raid Volume", 00:23:31.637 "block_size": 512, 00:23:31.637 "num_blocks": 253952, 00:23:31.637 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:31.637 "assigned_rate_limits": { 00:23:31.637 "rw_ios_per_sec": 0, 00:23:31.637 "rw_mbytes_per_sec": 0, 00:23:31.637 "r_mbytes_per_sec": 0, 00:23:31.637 "w_mbytes_per_sec": 0 00:23:31.637 }, 00:23:31.637 "claimed": false, 00:23:31.637 "zoned": false, 00:23:31.637 "supported_io_types": { 00:23:31.637 "read": true, 00:23:31.637 "write": true, 00:23:31.637 "unmap": true, 00:23:31.637 "flush": true, 00:23:31.637 "reset": true, 00:23:31.637 "nvme_admin": false, 00:23:31.637 "nvme_io": false, 00:23:31.637 "nvme_io_md": false, 00:23:31.637 "write_zeroes": true, 00:23:31.637 "zcopy": false, 00:23:31.637 "get_zone_info": false, 00:23:31.637 "zone_management": false, 00:23:31.637 "zone_append": false, 00:23:31.637 "compare": false, 00:23:31.637 "compare_and_write": false, 00:23:31.637 "abort": false, 00:23:31.637 "seek_hole": false, 00:23:31.637 "seek_data": false, 00:23:31.637 "copy": false, 00:23:31.637 "nvme_iov_md": false 00:23:31.637 }, 00:23:31.637 "memory_domains": [ 00:23:31.637 { 00:23:31.637 "dma_device_id": "system", 00:23:31.637 "dma_device_type": 1 00:23:31.637 }, 00:23:31.637 { 00:23:31.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.637 "dma_device_type": 2 00:23:31.637 }, 00:23:31.637 { 00:23:31.637 "dma_device_id": "system", 00:23:31.638 "dma_device_type": 1 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.638 "dma_device_type": 2 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "dma_device_id": "system", 00:23:31.638 "dma_device_type": 1 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.638 "dma_device_type": 2 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "dma_device_id": "system", 00:23:31.638 "dma_device_type": 1 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.638 "dma_device_type": 2 00:23:31.638 } 00:23:31.638 ], 00:23:31.638 "driver_specific": { 00:23:31.638 "raid": { 00:23:31.638 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:31.638 "strip_size_kb": 64, 00:23:31.638 "state": "online", 00:23:31.638 "raid_level": "concat", 
00:23:31.638 "superblock": true, 00:23:31.638 "num_base_bdevs": 4, 00:23:31.638 "num_base_bdevs_discovered": 4, 00:23:31.638 "num_base_bdevs_operational": 4, 00:23:31.638 "base_bdevs_list": [ 00:23:31.638 { 00:23:31.638 "name": "BaseBdev1", 00:23:31.638 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:31.638 "is_configured": true, 00:23:31.638 "data_offset": 2048, 00:23:31.638 "data_size": 63488 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "name": "BaseBdev2", 00:23:31.638 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:31.638 "is_configured": true, 00:23:31.638 "data_offset": 2048, 00:23:31.638 "data_size": 63488 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "name": "BaseBdev3", 00:23:31.638 "uuid": "a348a0b0-72d4-4575-9090-2daba4664ff3", 00:23:31.638 "is_configured": true, 00:23:31.638 "data_offset": 2048, 00:23:31.638 "data_size": 63488 00:23:31.638 }, 00:23:31.638 { 00:23:31.638 "name": "BaseBdev4", 00:23:31.638 "uuid": "527d30e8-b773-4e1f-9add-20aeafa03e6f", 00:23:31.638 "is_configured": true, 00:23:31.638 "data_offset": 2048, 00:23:31.638 "data_size": 63488 00:23:31.638 } 00:23:31.638 ] 00:23:31.638 } 00:23:31.638 } 00:23:31.638 }' 00:23:31.638 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:31.896 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:31.896 BaseBdev2 00:23:31.896 BaseBdev3 00:23:31.896 BaseBdev4' 00:23:31.896 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:31.896 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:31.896 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:32.154 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:32.154 "name": "BaseBdev1", 00:23:32.154 "aliases": [ 00:23:32.154 "97bebbfd-17a2-44d4-a3f4-b743d57f28f6" 00:23:32.154 ], 00:23:32.154 "product_name": "Malloc disk", 00:23:32.154 "block_size": 512, 00:23:32.154 "num_blocks": 65536, 00:23:32.154 "uuid": "97bebbfd-17a2-44d4-a3f4-b743d57f28f6", 00:23:32.154 "assigned_rate_limits": { 00:23:32.155 "rw_ios_per_sec": 0, 00:23:32.155 "rw_mbytes_per_sec": 0, 00:23:32.155 "r_mbytes_per_sec": 0, 00:23:32.155 "w_mbytes_per_sec": 0 00:23:32.155 }, 00:23:32.155 "claimed": true, 00:23:32.155 "claim_type": "exclusive_write", 00:23:32.155 "zoned": false, 00:23:32.155 "supported_io_types": { 00:23:32.155 "read": true, 00:23:32.155 "write": true, 00:23:32.155 "unmap": true, 00:23:32.155 "flush": true, 00:23:32.155 "reset": true, 00:23:32.155 "nvme_admin": false, 00:23:32.155 "nvme_io": false, 00:23:32.155 "nvme_io_md": false, 00:23:32.155 "write_zeroes": true, 00:23:32.155 "zcopy": true, 00:23:32.155 "get_zone_info": false, 00:23:32.155 "zone_management": false, 00:23:32.155 "zone_append": false, 00:23:32.155 "compare": false, 00:23:32.155 "compare_and_write": false, 00:23:32.155 "abort": true, 00:23:32.155 "seek_hole": false, 00:23:32.155 "seek_data": false, 00:23:32.155 "copy": true, 00:23:32.155 "nvme_iov_md": false 00:23:32.155 }, 00:23:32.155 "memory_domains": [ 00:23:32.155 { 00:23:32.155 "dma_device_id": "system", 00:23:32.155 "dma_device_type": 1 00:23:32.155 }, 00:23:32.155 { 00:23:32.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:32.155 "dma_device_type": 2 00:23:32.155 } 00:23:32.155 ], 00:23:32.155 "driver_specific": {} 00:23:32.155 }' 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:32.155 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:32.414 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:32.672 23:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:32.672 "name": "BaseBdev2", 00:23:32.672 "aliases": [ 00:23:32.672 "57707563-32c5-4dca-908b-83f9ff3151d8" 00:23:32.672 ], 00:23:32.672 "product_name": "Malloc disk", 00:23:32.672 "block_size": 512, 00:23:32.672 "num_blocks": 65536, 00:23:32.672 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:32.672 "assigned_rate_limits": { 00:23:32.672 "rw_ios_per_sec": 0, 00:23:32.672 "rw_mbytes_per_sec": 0, 00:23:32.672 "r_mbytes_per_sec": 0, 00:23:32.672 "w_mbytes_per_sec": 0 00:23:32.672 }, 00:23:32.672 "claimed": true, 00:23:32.672 "claim_type": "exclusive_write", 00:23:32.672 "zoned": false, 00:23:32.672 "supported_io_types": { 00:23:32.672 "read": true, 00:23:32.672 "write": true, 00:23:32.672 "unmap": true, 00:23:32.672 "flush": true, 00:23:32.672 "reset": true, 00:23:32.672 "nvme_admin": false, 00:23:32.672 "nvme_io": false, 00:23:32.672 "nvme_io_md": false, 00:23:32.672 "write_zeroes": true, 00:23:32.672 "zcopy": true, 00:23:32.672 "get_zone_info": false, 00:23:32.672 "zone_management": false, 00:23:32.672 "zone_append": false, 00:23:32.672 "compare": false, 00:23:32.672 "compare_and_write": false, 00:23:32.672 "abort": true, 00:23:32.672 "seek_hole": false, 00:23:32.672 "seek_data": false, 00:23:32.672 "copy": true, 00:23:32.672 "nvme_iov_md": false 00:23:32.672 }, 00:23:32.672 "memory_domains": [ 00:23:32.672 { 00:23:32.672 "dma_device_id": "system", 00:23:32.672 "dma_device_type": 1 00:23:32.672 }, 00:23:32.672 { 00:23:32.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.672 "dma_device_type": 2 00:23:32.672 } 00:23:32.672 ], 00:23:32.672 "driver_specific": {} 00:23:32.672 }' 00:23:32.672 23:09:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:32.672 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:32.672 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:32.672 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:32.929 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:33.186 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:33.186 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:33.186 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:33.186 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:33.444 "name": "BaseBdev3", 00:23:33.444 "aliases": [ 00:23:33.444 "a348a0b0-72d4-4575-9090-2daba4664ff3" 00:23:33.444 ], 00:23:33.444 "product_name": "Malloc disk", 00:23:33.444 "block_size": 512, 00:23:33.444 "num_blocks": 65536, 00:23:33.444 "uuid": "a348a0b0-72d4-4575-9090-2daba4664ff3", 00:23:33.444 "assigned_rate_limits": { 00:23:33.444 "rw_ios_per_sec": 0, 00:23:33.444 "rw_mbytes_per_sec": 0, 00:23:33.444 "r_mbytes_per_sec": 0, 00:23:33.444 "w_mbytes_per_sec": 0 00:23:33.444 }, 00:23:33.444 "claimed": true, 00:23:33.444 "claim_type": "exclusive_write", 00:23:33.444 "zoned": false, 00:23:33.444 "supported_io_types": { 00:23:33.444 "read": true, 00:23:33.444 "write": true, 00:23:33.444 "unmap": true, 00:23:33.444 "flush": true, 00:23:33.444 "reset": true, 00:23:33.444 "nvme_admin": false, 00:23:33.444 "nvme_io": false, 00:23:33.444 "nvme_io_md": false, 00:23:33.444 "write_zeroes": true, 00:23:33.444 "zcopy": true, 00:23:33.444 "get_zone_info": false, 00:23:33.444 "zone_management": false, 00:23:33.444 "zone_append": false, 00:23:33.444 "compare": false, 00:23:33.444 "compare_and_write": false, 00:23:33.444 "abort": true, 00:23:33.444 "seek_hole": false, 00:23:33.444 "seek_data": false, 00:23:33.444 "copy": true, 00:23:33.444 "nvme_iov_md": false 00:23:33.444 }, 00:23:33.444 "memory_domains": [ 00:23:33.444 { 00:23:33.444 "dma_device_id": "system", 00:23:33.444 "dma_device_type": 1 00:23:33.444 }, 00:23:33.444 { 00:23:33.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.444 "dma_device_type": 2 00:23:33.444 } 00:23:33.444 ], 00:23:33.444 "driver_specific": {} 00:23:33.444 }' 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:33.444 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:33.703 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:33.703 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:33.703 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:33.703 23:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:33.703 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:33.703 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:33.703 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:33.703 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:33.963 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:33.963 "name": "BaseBdev4", 00:23:33.963 "aliases": [ 00:23:33.963 "527d30e8-b773-4e1f-9add-20aeafa03e6f" 00:23:33.963 ], 00:23:33.963 "product_name": "Malloc disk", 00:23:33.963 "block_size": 512, 00:23:33.963 "num_blocks": 65536, 00:23:33.963 "uuid": "527d30e8-b773-4e1f-9add-20aeafa03e6f", 00:23:33.963 "assigned_rate_limits": { 00:23:33.963 "rw_ios_per_sec": 0, 00:23:33.963 "rw_mbytes_per_sec": 0, 00:23:33.963 "r_mbytes_per_sec": 0, 00:23:33.963 "w_mbytes_per_sec": 0 00:23:33.963 }, 00:23:33.963 "claimed": true, 00:23:33.963 "claim_type": "exclusive_write", 00:23:33.963 "zoned": false, 00:23:33.963 "supported_io_types": { 00:23:33.963 "read": true, 00:23:33.963 "write": true, 00:23:33.963 "unmap": true, 00:23:33.963 "flush": true, 00:23:33.963 "reset": true, 00:23:33.963 "nvme_admin": false, 00:23:33.963 "nvme_io": false, 00:23:33.963 "nvme_io_md": false, 00:23:33.963 "write_zeroes": true, 00:23:33.963 "zcopy": true, 00:23:33.963 "get_zone_info": false, 00:23:33.963 "zone_management": false, 00:23:33.963 "zone_append": false, 00:23:33.964 "compare": false, 00:23:33.964 "compare_and_write": false, 00:23:33.964 "abort": true, 00:23:33.964 "seek_hole": false, 00:23:33.964 "seek_data": false, 00:23:33.964 "copy": true, 00:23:33.964 "nvme_iov_md": false 00:23:33.964 }, 00:23:33.964 "memory_domains": [ 00:23:33.964 { 00:23:33.964 "dma_device_id": "system", 00:23:33.964 "dma_device_type": 1 00:23:33.964 }, 00:23:33.964 { 00:23:33.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.964 "dma_device_type": 2 00:23:33.964 } 00:23:33.964 ], 00:23:33.964 "driver_specific": {} 00:23:33.964 }' 00:23:33.964 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:34.221 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:34.479 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:34.479 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:34.479 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:34.479 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:34.479 23:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:34.738 [2024-07-13 23:09:24.008406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:34.738 [2024-07-13 23:09:24.008442] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:34.738 [2024-07-13 23:09:24.008546] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.738 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.996 
23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.996 "name": "Existed_Raid", 00:23:34.996 "uuid": "6b816652-f7e0-4e3d-9bea-b9b4f166fdca", 00:23:34.996 "strip_size_kb": 64, 00:23:34.996 "state": "offline", 00:23:34.996 "raid_level": "concat", 00:23:34.996 "superblock": true, 00:23:34.996 "num_base_bdevs": 4, 00:23:34.996 "num_base_bdevs_discovered": 3, 00:23:34.996 "num_base_bdevs_operational": 3, 00:23:34.996 "base_bdevs_list": [ 00:23:34.996 { 00:23:34.996 "name": null, 00:23:34.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.996 "is_configured": false, 00:23:34.996 "data_offset": 2048, 00:23:34.996 "data_size": 63488 00:23:34.996 }, 00:23:34.996 { 00:23:34.996 "name": "BaseBdev2", 00:23:34.996 "uuid": "57707563-32c5-4dca-908b-83f9ff3151d8", 00:23:34.996 "is_configured": true, 00:23:34.996 "data_offset": 2048, 00:23:34.996 "data_size": 63488 00:23:34.996 }, 00:23:34.996 { 00:23:34.996 "name": "BaseBdev3", 00:23:34.996 "uuid": "a348a0b0-72d4-4575-9090-2daba4664ff3", 00:23:34.996 "is_configured": true, 00:23:34.996 "data_offset": 2048, 00:23:34.996 "data_size": 63488 00:23:34.996 }, 00:23:34.996 { 00:23:34.996 "name": "BaseBdev4", 00:23:34.996 "uuid": "527d30e8-b773-4e1f-9add-20aeafa03e6f", 00:23:34.996 "is_configured": true, 00:23:34.996 "data_offset": 2048, 00:23:34.996 "data_size": 63488 00:23:34.996 } 00:23:34.996 ] 00:23:34.996 }' 00:23:34.996 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.996 23:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.931 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:35.931 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:35.931 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:35.931 23:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.931 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:35.931 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:35.931 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:36.190 [2024-07-13 23:09:25.479624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:36.190 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:36.190 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:36.190 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.190 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:36.448 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:36.448 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:36.448 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:36.705 [2024-07-13 23:09:25.941861] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:36.705 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:36.705 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:36.705 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.705 23:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:36.963 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:36.963 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:36.963 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:37.220 [2024-07-13 23:09:26.484222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:37.220 [2024-07-13 23:09:26.484302] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:23:37.220 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:37.220 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:37.220 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.220 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:37.478 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:37.478 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:37.478 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:37.478 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:37.478 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:37.479 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:37.737 BaseBdev2 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:37.737 23:09:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:23:37.995 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:38.253 [ 00:23:38.254 { 00:23:38.254 "name": "BaseBdev2", 00:23:38.254 "aliases": [ 00:23:38.254 "b7700ce9-89f5-47fa-a5c6-e2409f01891e" 00:23:38.254 ], 00:23:38.254 "product_name": "Malloc disk", 00:23:38.254 "block_size": 512, 00:23:38.254 "num_blocks": 65536, 00:23:38.254 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:38.254 "assigned_rate_limits": { 00:23:38.254 "rw_ios_per_sec": 0, 00:23:38.254 "rw_mbytes_per_sec": 0, 00:23:38.254 "r_mbytes_per_sec": 0, 00:23:38.254 "w_mbytes_per_sec": 0 00:23:38.254 }, 00:23:38.254 "claimed": false, 00:23:38.254 "zoned": false, 00:23:38.254 "supported_io_types": { 00:23:38.254 "read": true, 00:23:38.254 "write": true, 00:23:38.254 "unmap": true, 00:23:38.254 "flush": true, 00:23:38.254 "reset": true, 00:23:38.254 "nvme_admin": false, 00:23:38.254 "nvme_io": false, 00:23:38.254 "nvme_io_md": false, 00:23:38.254 "write_zeroes": true, 00:23:38.254 "zcopy": true, 00:23:38.254 "get_zone_info": false, 00:23:38.254 "zone_management": false, 00:23:38.254 "zone_append": false, 00:23:38.254 "compare": false, 00:23:38.254 "compare_and_write": false, 00:23:38.254 "abort": true, 00:23:38.254 "seek_hole": false, 00:23:38.254 "seek_data": false, 00:23:38.254 "copy": true, 00:23:38.254 "nvme_iov_md": false 00:23:38.254 }, 00:23:38.254 "memory_domains": [ 00:23:38.254 { 00:23:38.254 "dma_device_id": "system", 00:23:38.254 "dma_device_type": 1 00:23:38.254 }, 00:23:38.254 { 00:23:38.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.254 "dma_device_type": 2 00:23:38.254 } 00:23:38.254 ], 00:23:38.254 "driver_specific": {} 00:23:38.254 } 00:23:38.254 ] 00:23:38.254 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:38.254 23:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:38.254 23:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:38.254 23:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:38.512 BaseBdev3 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:38.512 23:09:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:38.771 [ 00:23:38.771 { 00:23:38.771 "name": "BaseBdev3", 00:23:38.771 
"aliases": [ 00:23:38.771 "607c055b-27e0-4eff-b5d9-d24a96992437" 00:23:38.771 ], 00:23:38.771 "product_name": "Malloc disk", 00:23:38.771 "block_size": 512, 00:23:38.771 "num_blocks": 65536, 00:23:38.771 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:38.771 "assigned_rate_limits": { 00:23:38.771 "rw_ios_per_sec": 0, 00:23:38.771 "rw_mbytes_per_sec": 0, 00:23:38.771 "r_mbytes_per_sec": 0, 00:23:38.771 "w_mbytes_per_sec": 0 00:23:38.771 }, 00:23:38.771 "claimed": false, 00:23:38.771 "zoned": false, 00:23:38.771 "supported_io_types": { 00:23:38.771 "read": true, 00:23:38.771 "write": true, 00:23:38.771 "unmap": true, 00:23:38.771 "flush": true, 00:23:38.771 "reset": true, 00:23:38.771 "nvme_admin": false, 00:23:38.771 "nvme_io": false, 00:23:38.771 "nvme_io_md": false, 00:23:38.771 "write_zeroes": true, 00:23:38.771 "zcopy": true, 00:23:38.771 "get_zone_info": false, 00:23:38.771 "zone_management": false, 00:23:38.771 "zone_append": false, 00:23:38.771 "compare": false, 00:23:38.771 "compare_and_write": false, 00:23:38.771 "abort": true, 00:23:38.771 "seek_hole": false, 00:23:38.771 "seek_data": false, 00:23:38.771 "copy": true, 00:23:38.771 "nvme_iov_md": false 00:23:38.771 }, 00:23:38.771 "memory_domains": [ 00:23:38.771 { 00:23:38.771 "dma_device_id": "system", 00:23:38.771 "dma_device_type": 1 00:23:38.771 }, 00:23:38.771 { 00:23:38.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.771 "dma_device_type": 2 00:23:38.771 } 00:23:38.771 ], 00:23:38.771 "driver_specific": {} 00:23:38.771 } 00:23:38.771 ] 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:39.029 BaseBdev4 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:39.029 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:39.287 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:39.854 [ 00:23:39.854 { 00:23:39.854 "name": "BaseBdev4", 00:23:39.854 "aliases": [ 00:23:39.854 "11e4e187-d128-4dce-9ae4-18334dbb76b2" 00:23:39.854 ], 00:23:39.854 "product_name": "Malloc disk", 00:23:39.854 "block_size": 512, 00:23:39.854 "num_blocks": 65536, 00:23:39.854 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:39.854 "assigned_rate_limits": { 00:23:39.854 
"rw_ios_per_sec": 0, 00:23:39.854 "rw_mbytes_per_sec": 0, 00:23:39.854 "r_mbytes_per_sec": 0, 00:23:39.854 "w_mbytes_per_sec": 0 00:23:39.854 }, 00:23:39.854 "claimed": false, 00:23:39.854 "zoned": false, 00:23:39.854 "supported_io_types": { 00:23:39.854 "read": true, 00:23:39.854 "write": true, 00:23:39.854 "unmap": true, 00:23:39.854 "flush": true, 00:23:39.854 "reset": true, 00:23:39.854 "nvme_admin": false, 00:23:39.854 "nvme_io": false, 00:23:39.854 "nvme_io_md": false, 00:23:39.854 "write_zeroes": true, 00:23:39.854 "zcopy": true, 00:23:39.854 "get_zone_info": false, 00:23:39.854 "zone_management": false, 00:23:39.854 "zone_append": false, 00:23:39.854 "compare": false, 00:23:39.854 "compare_and_write": false, 00:23:39.854 "abort": true, 00:23:39.854 "seek_hole": false, 00:23:39.854 "seek_data": false, 00:23:39.854 "copy": true, 00:23:39.854 "nvme_iov_md": false 00:23:39.854 }, 00:23:39.854 "memory_domains": [ 00:23:39.855 { 00:23:39.855 "dma_device_id": "system", 00:23:39.855 "dma_device_type": 1 00:23:39.855 }, 00:23:39.855 { 00:23:39.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.855 "dma_device_type": 2 00:23:39.855 } 00:23:39.855 ], 00:23:39.855 "driver_specific": {} 00:23:39.855 } 00:23:39.855 ] 00:23:39.855 23:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:39.855 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:39.855 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:39.855 23:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:39.855 [2024-07-13 23:09:29.233895] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:39.855 [2024-07-13 23:09:29.233991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:39.855 [2024-07-13 23:09:29.234023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.855 [2024-07-13 23:09:29.236210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:39.855 [2024-07-13 23:09:29.236285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.855 
23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.855 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.114 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.114 "name": "Existed_Raid", 00:23:40.114 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:40.114 "strip_size_kb": 64, 00:23:40.114 "state": "configuring", 00:23:40.114 "raid_level": "concat", 00:23:40.114 "superblock": true, 00:23:40.114 "num_base_bdevs": 4, 00:23:40.114 "num_base_bdevs_discovered": 3, 00:23:40.114 "num_base_bdevs_operational": 4, 00:23:40.114 "base_bdevs_list": [ 00:23:40.114 { 00:23:40.114 "name": "BaseBdev1", 00:23:40.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.114 "is_configured": false, 00:23:40.114 "data_offset": 0, 00:23:40.114 "data_size": 0 00:23:40.114 }, 00:23:40.114 { 00:23:40.114 "name": "BaseBdev2", 00:23:40.114 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:40.114 "is_configured": true, 00:23:40.114 "data_offset": 2048, 00:23:40.114 "data_size": 63488 00:23:40.114 }, 00:23:40.114 { 00:23:40.114 "name": "BaseBdev3", 00:23:40.114 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:40.114 "is_configured": true, 00:23:40.114 "data_offset": 2048, 00:23:40.114 "data_size": 63488 00:23:40.114 }, 00:23:40.114 { 00:23:40.114 "name": "BaseBdev4", 00:23:40.114 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:40.114 "is_configured": true, 00:23:40.114 "data_offset": 2048, 00:23:40.114 "data_size": 63488 00:23:40.114 } 00:23:40.114 ] 00:23:40.114 }' 00:23:40.114 23:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.114 23:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:41.049 [2024-07-13 23:09:30.382150] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.049 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.308 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.308 "name": "Existed_Raid", 00:23:41.308 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:41.308 "strip_size_kb": 64, 00:23:41.308 "state": "configuring", 00:23:41.308 "raid_level": "concat", 00:23:41.308 "superblock": true, 00:23:41.308 "num_base_bdevs": 4, 00:23:41.308 "num_base_bdevs_discovered": 2, 00:23:41.308 "num_base_bdevs_operational": 4, 00:23:41.308 "base_bdevs_list": [ 00:23:41.308 { 00:23:41.308 "name": "BaseBdev1", 00:23:41.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.308 "is_configured": false, 00:23:41.308 "data_offset": 0, 00:23:41.308 "data_size": 0 00:23:41.308 }, 00:23:41.308 { 00:23:41.308 "name": null, 00:23:41.308 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:41.308 "is_configured": false, 00:23:41.308 "data_offset": 2048, 00:23:41.308 "data_size": 63488 00:23:41.308 }, 00:23:41.308 { 00:23:41.308 "name": "BaseBdev3", 00:23:41.308 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:41.308 "is_configured": true, 00:23:41.308 "data_offset": 2048, 00:23:41.308 "data_size": 63488 00:23:41.308 }, 00:23:41.308 { 00:23:41.308 "name": "BaseBdev4", 00:23:41.308 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:41.308 "is_configured": true, 00:23:41.308 "data_offset": 2048, 00:23:41.308 "data_size": 63488 00:23:41.308 } 00:23:41.308 ] 00:23:41.308 }' 00:23:41.308 23:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.308 23:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.875 23:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.875 23:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:42.135 23:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:42.135 23:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:42.394 [2024-07-13 23:09:31.730880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:42.394 BaseBdev1 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:42.394 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:42.652 23:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:42.911 [ 00:23:42.911 { 00:23:42.911 "name": "BaseBdev1", 00:23:42.911 "aliases": [ 00:23:42.911 "b001c53e-82dc-4b60-b574-5a2e51ceebeb" 00:23:42.911 ], 00:23:42.911 "product_name": "Malloc disk", 00:23:42.911 "block_size": 512, 00:23:42.911 "num_blocks": 65536, 00:23:42.911 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:42.911 "assigned_rate_limits": { 00:23:42.911 "rw_ios_per_sec": 0, 00:23:42.911 "rw_mbytes_per_sec": 0, 00:23:42.911 "r_mbytes_per_sec": 0, 00:23:42.911 "w_mbytes_per_sec": 0 00:23:42.911 }, 00:23:42.911 "claimed": true, 00:23:42.911 "claim_type": "exclusive_write", 00:23:42.911 "zoned": false, 00:23:42.911 "supported_io_types": { 00:23:42.911 "read": true, 00:23:42.911 "write": true, 00:23:42.911 "unmap": true, 00:23:42.911 "flush": true, 00:23:42.911 "reset": true, 00:23:42.911 "nvme_admin": false, 00:23:42.911 "nvme_io": false, 00:23:42.911 "nvme_io_md": false, 00:23:42.911 "write_zeroes": true, 00:23:42.911 "zcopy": true, 00:23:42.911 "get_zone_info": false, 00:23:42.911 "zone_management": false, 00:23:42.911 "zone_append": false, 00:23:42.911 "compare": false, 00:23:42.911 "compare_and_write": false, 00:23:42.911 "abort": true, 00:23:42.911 "seek_hole": false, 00:23:42.911 "seek_data": false, 00:23:42.911 "copy": true, 00:23:42.911 "nvme_iov_md": false 00:23:42.911 }, 00:23:42.911 "memory_domains": [ 00:23:42.911 { 00:23:42.911 "dma_device_id": "system", 00:23:42.911 "dma_device_type": 1 00:23:42.911 }, 00:23:42.911 { 00:23:42.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.911 "dma_device_type": 2 00:23:42.911 } 00:23:42.911 ], 00:23:42.911 "driver_specific": {} 00:23:42.911 } 00:23:42.911 ] 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.911 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:43.174 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.174 "name": "Existed_Raid", 00:23:43.174 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:43.174 "strip_size_kb": 64, 00:23:43.174 "state": "configuring", 00:23:43.174 "raid_level": "concat", 00:23:43.174 "superblock": true, 00:23:43.174 "num_base_bdevs": 4, 00:23:43.174 "num_base_bdevs_discovered": 3, 00:23:43.174 "num_base_bdevs_operational": 4, 00:23:43.174 "base_bdevs_list": [ 00:23:43.174 { 00:23:43.174 "name": "BaseBdev1", 00:23:43.174 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:43.174 "is_configured": true, 00:23:43.174 "data_offset": 2048, 00:23:43.174 "data_size": 63488 00:23:43.174 }, 00:23:43.174 { 00:23:43.174 "name": null, 00:23:43.174 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:43.174 "is_configured": false, 00:23:43.174 "data_offset": 2048, 00:23:43.174 "data_size": 63488 00:23:43.174 }, 00:23:43.174 { 00:23:43.174 "name": "BaseBdev3", 00:23:43.174 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:43.174 "is_configured": true, 00:23:43.174 "data_offset": 2048, 00:23:43.174 "data_size": 63488 00:23:43.174 }, 00:23:43.174 { 00:23:43.174 "name": "BaseBdev4", 00:23:43.174 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:43.174 "is_configured": true, 00:23:43.174 "data_offset": 2048, 00:23:43.174 "data_size": 63488 00:23:43.174 } 00:23:43.174 ] 00:23:43.174 }' 00:23:43.174 23:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.174 23:09:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.755 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.755 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:44.025 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:44.025 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:44.289 [2024-07-13 23:09:33.627467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.289 23:09:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.289 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.546 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.546 "name": "Existed_Raid", 00:23:44.546 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:44.546 "strip_size_kb": 64, 00:23:44.546 "state": "configuring", 00:23:44.546 "raid_level": "concat", 00:23:44.546 "superblock": true, 00:23:44.546 "num_base_bdevs": 4, 00:23:44.546 "num_base_bdevs_discovered": 2, 00:23:44.546 "num_base_bdevs_operational": 4, 00:23:44.546 "base_bdevs_list": [ 00:23:44.546 { 00:23:44.546 "name": "BaseBdev1", 00:23:44.546 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:44.546 "is_configured": true, 00:23:44.546 "data_offset": 2048, 00:23:44.546 "data_size": 63488 00:23:44.546 }, 00:23:44.546 { 00:23:44.546 "name": null, 00:23:44.546 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:44.546 "is_configured": false, 00:23:44.546 "data_offset": 2048, 00:23:44.546 "data_size": 63488 00:23:44.546 }, 00:23:44.546 { 00:23:44.547 "name": null, 00:23:44.547 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:44.547 "is_configured": false, 00:23:44.547 "data_offset": 2048, 00:23:44.547 "data_size": 63488 00:23:44.547 }, 00:23:44.547 { 00:23:44.547 "name": "BaseBdev4", 00:23:44.547 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:44.547 "is_configured": true, 00:23:44.547 "data_offset": 2048, 00:23:44.547 "data_size": 63488 00:23:44.547 } 00:23:44.547 ] 00:23:44.547 }' 00:23:44.547 23:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.547 23:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.481 23:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.481 23:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:45.481 23:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:45.481 23:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:45.739 [2024-07-13 23:09:35.069243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.739 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.998 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.998 "name": "Existed_Raid", 00:23:45.998 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:45.998 "strip_size_kb": 64, 00:23:45.998 "state": "configuring", 00:23:45.998 "raid_level": "concat", 00:23:45.998 "superblock": true, 00:23:45.998 "num_base_bdevs": 4, 00:23:45.998 "num_base_bdevs_discovered": 3, 00:23:45.998 "num_base_bdevs_operational": 4, 00:23:45.998 "base_bdevs_list": [ 00:23:45.998 { 00:23:45.998 "name": "BaseBdev1", 00:23:45.998 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:45.998 "is_configured": true, 00:23:45.998 "data_offset": 2048, 00:23:45.998 "data_size": 63488 00:23:45.998 }, 00:23:45.998 { 00:23:45.998 "name": null, 00:23:45.998 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:45.998 "is_configured": false, 00:23:45.998 "data_offset": 2048, 00:23:45.998 "data_size": 63488 00:23:45.998 }, 00:23:45.998 { 00:23:45.998 "name": "BaseBdev3", 00:23:45.998 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:45.998 "is_configured": true, 00:23:45.998 "data_offset": 2048, 00:23:45.998 "data_size": 63488 00:23:45.998 }, 00:23:45.998 { 00:23:45.998 "name": "BaseBdev4", 00:23:45.998 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:45.998 "is_configured": true, 00:23:45.998 "data_offset": 2048, 00:23:45.998 "data_size": 63488 00:23:45.998 } 00:23:45.998 ] 00:23:45.998 }' 00:23:45.998 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.998 23:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.565 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.565 23:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:47.132 [2024-07-13 23:09:36.489644] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.132 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.390 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.390 "name": "Existed_Raid", 00:23:47.390 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:47.390 "strip_size_kb": 64, 00:23:47.390 "state": "configuring", 00:23:47.390 "raid_level": "concat", 00:23:47.391 "superblock": true, 00:23:47.391 "num_base_bdevs": 4, 00:23:47.391 "num_base_bdevs_discovered": 2, 00:23:47.391 "num_base_bdevs_operational": 4, 00:23:47.391 "base_bdevs_list": [ 00:23:47.391 { 00:23:47.391 "name": null, 00:23:47.391 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:47.391 "is_configured": false, 00:23:47.391 "data_offset": 2048, 00:23:47.391 "data_size": 63488 00:23:47.391 }, 00:23:47.391 { 00:23:47.391 "name": null, 00:23:47.391 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:47.391 "is_configured": false, 00:23:47.391 "data_offset": 2048, 00:23:47.391 "data_size": 63488 00:23:47.391 }, 00:23:47.391 { 00:23:47.391 "name": "BaseBdev3", 00:23:47.391 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:47.391 "is_configured": true, 00:23:47.391 "data_offset": 2048, 00:23:47.391 "data_size": 63488 00:23:47.391 }, 00:23:47.391 { 00:23:47.391 "name": "BaseBdev4", 00:23:47.391 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:47.391 "is_configured": true, 00:23:47.391 "data_offset": 2048, 00:23:47.391 "data_size": 63488 00:23:47.391 } 00:23:47.391 ] 00:23:47.391 }' 00:23:47.391 23:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.391 23:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.325 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.325 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:48.325 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:48.325 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:48.583 [2024-07-13 23:09:37.951434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
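
The verification step that recurs throughout this trace boils down to two RPC-plus-jq calls; a minimal sketch, using the script path, socket, raid name, and jq filters exactly as they appear in the trace above (the base_bdevs_list index [1] is just one example slot):

# Dump all raid bdevs over the test's RPC socket and pick out Existed_Raid
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'

# Check whether a given slot in base_bdevs_list has been configured yet
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq '.[0].base_bdevs_list[1].is_configured'
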
00:23:48.583 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:48.583 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:48.583 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.584 23:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.842 23:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.842 "name": "Existed_Raid", 00:23:48.842 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:48.842 "strip_size_kb": 64, 00:23:48.842 "state": "configuring", 00:23:48.842 "raid_level": "concat", 00:23:48.842 "superblock": true, 00:23:48.842 "num_base_bdevs": 4, 00:23:48.842 "num_base_bdevs_discovered": 3, 00:23:48.842 "num_base_bdevs_operational": 4, 00:23:48.842 "base_bdevs_list": [ 00:23:48.842 { 00:23:48.842 "name": null, 00:23:48.842 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:48.842 "is_configured": false, 00:23:48.842 "data_offset": 2048, 00:23:48.842 "data_size": 63488 00:23:48.842 }, 00:23:48.842 { 00:23:48.842 "name": "BaseBdev2", 00:23:48.842 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:48.842 "is_configured": true, 00:23:48.842 "data_offset": 2048, 00:23:48.842 "data_size": 63488 00:23:48.842 }, 00:23:48.842 { 00:23:48.842 "name": "BaseBdev3", 00:23:48.842 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:48.842 "is_configured": true, 00:23:48.842 "data_offset": 2048, 00:23:48.842 "data_size": 63488 00:23:48.842 }, 00:23:48.842 { 00:23:48.842 "name": "BaseBdev4", 00:23:48.842 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:48.842 "is_configured": true, 00:23:48.842 "data_offset": 2048, 00:23:48.842 "data_size": 63488 00:23:48.842 } 00:23:48.842 ] 00:23:48.842 }' 00:23:48.842 23:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.842 23:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.775 23:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.775 23:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:49.775 23:09:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:49.775 23:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.775 23:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:50.033 23:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b001c53e-82dc-4b60-b574-5a2e51ceebeb 00:23:50.292 [2024-07-13 23:09:39.692527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:50.292 [2024-07-13 23:09:39.692753] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:23:50.292 [2024-07-13 23:09:39.692768] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:50.292 [2024-07-13 23:09:39.692845] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:50.292 [2024-07-13 23:09:39.693223] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:23:50.292 [2024-07-13 23:09:39.693405] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:23:50.292 [2024-07-13 23:09:39.693632] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.292 NewBaseBdev 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:50.550 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:50.809 23:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:50.809 [ 00:23:50.809 { 00:23:50.809 "name": "NewBaseBdev", 00:23:50.809 "aliases": [ 00:23:50.809 "b001c53e-82dc-4b60-b574-5a2e51ceebeb" 00:23:50.809 ], 00:23:50.809 "product_name": "Malloc disk", 00:23:50.809 "block_size": 512, 00:23:50.809 "num_blocks": 65536, 00:23:50.809 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:50.809 "assigned_rate_limits": { 00:23:50.809 "rw_ios_per_sec": 0, 00:23:50.809 "rw_mbytes_per_sec": 0, 00:23:50.809 "r_mbytes_per_sec": 0, 00:23:50.809 "w_mbytes_per_sec": 0 00:23:50.809 }, 00:23:50.809 "claimed": true, 00:23:50.809 "claim_type": "exclusive_write", 00:23:50.809 "zoned": false, 00:23:50.809 "supported_io_types": { 00:23:50.809 "read": true, 00:23:50.809 "write": true, 00:23:50.809 "unmap": true, 00:23:50.809 "flush": true, 00:23:50.809 "reset": true, 00:23:50.809 "nvme_admin": false, 00:23:50.809 "nvme_io": false, 00:23:50.809 "nvme_io_md": false, 00:23:50.809 
"write_zeroes": true, 00:23:50.809 "zcopy": true, 00:23:50.809 "get_zone_info": false, 00:23:50.809 "zone_management": false, 00:23:50.809 "zone_append": false, 00:23:50.809 "compare": false, 00:23:50.809 "compare_and_write": false, 00:23:50.809 "abort": true, 00:23:50.809 "seek_hole": false, 00:23:50.809 "seek_data": false, 00:23:50.809 "copy": true, 00:23:50.809 "nvme_iov_md": false 00:23:50.809 }, 00:23:50.809 "memory_domains": [ 00:23:50.809 { 00:23:50.809 "dma_device_id": "system", 00:23:50.809 "dma_device_type": 1 00:23:50.809 }, 00:23:50.809 { 00:23:50.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.809 "dma_device_type": 2 00:23:50.809 } 00:23:50.809 ], 00:23:50.809 "driver_specific": {} 00:23:50.809 } 00:23:50.809 ] 00:23:51.067 23:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.068 "name": "Existed_Raid", 00:23:51.068 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:51.068 "strip_size_kb": 64, 00:23:51.068 "state": "online", 00:23:51.068 "raid_level": "concat", 00:23:51.068 "superblock": true, 00:23:51.068 "num_base_bdevs": 4, 00:23:51.068 "num_base_bdevs_discovered": 4, 00:23:51.068 "num_base_bdevs_operational": 4, 00:23:51.068 "base_bdevs_list": [ 00:23:51.068 { 00:23:51.068 "name": "NewBaseBdev", 00:23:51.068 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:51.068 "is_configured": true, 00:23:51.068 "data_offset": 2048, 00:23:51.068 "data_size": 63488 00:23:51.068 }, 00:23:51.068 { 00:23:51.068 "name": "BaseBdev2", 00:23:51.068 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:51.068 "is_configured": true, 00:23:51.068 "data_offset": 2048, 00:23:51.068 "data_size": 63488 00:23:51.068 }, 00:23:51.068 { 00:23:51.068 "name": "BaseBdev3", 00:23:51.068 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:51.068 "is_configured": true, 00:23:51.068 "data_offset": 2048, 00:23:51.068 "data_size": 63488 00:23:51.068 }, 00:23:51.068 { 
00:23:51.068 "name": "BaseBdev4", 00:23:51.068 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:51.068 "is_configured": true, 00:23:51.068 "data_offset": 2048, 00:23:51.068 "data_size": 63488 00:23:51.068 } 00:23:51.068 ] 00:23:51.068 }' 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.068 23:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:52.003 [2024-07-13 23:09:41.277659] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:52.003 "name": "Existed_Raid", 00:23:52.003 "aliases": [ 00:23:52.003 "02493e10-1ad2-4d25-8d5d-317f3020dab4" 00:23:52.003 ], 00:23:52.003 "product_name": "Raid Volume", 00:23:52.003 "block_size": 512, 00:23:52.003 "num_blocks": 253952, 00:23:52.003 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:52.003 "assigned_rate_limits": { 00:23:52.003 "rw_ios_per_sec": 0, 00:23:52.003 "rw_mbytes_per_sec": 0, 00:23:52.003 "r_mbytes_per_sec": 0, 00:23:52.003 "w_mbytes_per_sec": 0 00:23:52.003 }, 00:23:52.003 "claimed": false, 00:23:52.003 "zoned": false, 00:23:52.003 "supported_io_types": { 00:23:52.003 "read": true, 00:23:52.003 "write": true, 00:23:52.003 "unmap": true, 00:23:52.003 "flush": true, 00:23:52.003 "reset": true, 00:23:52.003 "nvme_admin": false, 00:23:52.003 "nvme_io": false, 00:23:52.003 "nvme_io_md": false, 00:23:52.003 "write_zeroes": true, 00:23:52.003 "zcopy": false, 00:23:52.003 "get_zone_info": false, 00:23:52.003 "zone_management": false, 00:23:52.003 "zone_append": false, 00:23:52.003 "compare": false, 00:23:52.003 "compare_and_write": false, 00:23:52.003 "abort": false, 00:23:52.003 "seek_hole": false, 00:23:52.003 "seek_data": false, 00:23:52.003 "copy": false, 00:23:52.003 "nvme_iov_md": false 00:23:52.003 }, 00:23:52.003 "memory_domains": [ 00:23:52.003 { 00:23:52.003 "dma_device_id": "system", 00:23:52.003 "dma_device_type": 1 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.003 "dma_device_type": 2 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": "system", 00:23:52.003 "dma_device_type": 1 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.003 "dma_device_type": 2 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": "system", 00:23:52.003 "dma_device_type": 1 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:52.003 "dma_device_type": 2 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": "system", 00:23:52.003 "dma_device_type": 1 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.003 "dma_device_type": 2 00:23:52.003 } 00:23:52.003 ], 00:23:52.003 "driver_specific": { 00:23:52.003 "raid": { 00:23:52.003 "uuid": "02493e10-1ad2-4d25-8d5d-317f3020dab4", 00:23:52.003 "strip_size_kb": 64, 00:23:52.003 "state": "online", 00:23:52.003 "raid_level": "concat", 00:23:52.003 "superblock": true, 00:23:52.003 "num_base_bdevs": 4, 00:23:52.003 "num_base_bdevs_discovered": 4, 00:23:52.003 "num_base_bdevs_operational": 4, 00:23:52.003 "base_bdevs_list": [ 00:23:52.003 { 00:23:52.003 "name": "NewBaseBdev", 00:23:52.003 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:52.003 "is_configured": true, 00:23:52.003 "data_offset": 2048, 00:23:52.003 "data_size": 63488 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "name": "BaseBdev2", 00:23:52.003 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:52.003 "is_configured": true, 00:23:52.003 "data_offset": 2048, 00:23:52.003 "data_size": 63488 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "name": "BaseBdev3", 00:23:52.003 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:52.003 "is_configured": true, 00:23:52.003 "data_offset": 2048, 00:23:52.003 "data_size": 63488 00:23:52.003 }, 00:23:52.003 { 00:23:52.003 "name": "BaseBdev4", 00:23:52.003 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:52.003 "is_configured": true, 00:23:52.003 "data_offset": 2048, 00:23:52.003 "data_size": 63488 00:23:52.003 } 00:23:52.003 ] 00:23:52.003 } 00:23:52.003 } 00:23:52.003 }' 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.003 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:52.003 BaseBdev2 00:23:52.003 BaseBdev3 00:23:52.003 BaseBdev4' 00:23:52.004 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.004 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:52.004 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:52.262 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:52.262 "name": "NewBaseBdev", 00:23:52.262 "aliases": [ 00:23:52.262 "b001c53e-82dc-4b60-b574-5a2e51ceebeb" 00:23:52.262 ], 00:23:52.262 "product_name": "Malloc disk", 00:23:52.262 "block_size": 512, 00:23:52.262 "num_blocks": 65536, 00:23:52.262 "uuid": "b001c53e-82dc-4b60-b574-5a2e51ceebeb", 00:23:52.262 "assigned_rate_limits": { 00:23:52.262 "rw_ios_per_sec": 0, 00:23:52.262 "rw_mbytes_per_sec": 0, 00:23:52.262 "r_mbytes_per_sec": 0, 00:23:52.262 "w_mbytes_per_sec": 0 00:23:52.262 }, 00:23:52.262 "claimed": true, 00:23:52.262 "claim_type": "exclusive_write", 00:23:52.262 "zoned": false, 00:23:52.262 "supported_io_types": { 00:23:52.262 "read": true, 00:23:52.262 "write": true, 00:23:52.262 "unmap": true, 00:23:52.262 "flush": true, 00:23:52.262 "reset": true, 00:23:52.262 "nvme_admin": false, 00:23:52.262 "nvme_io": false, 00:23:52.262 "nvme_io_md": false, 00:23:52.262 "write_zeroes": true, 00:23:52.262 "zcopy": true, 00:23:52.262 "get_zone_info": 
false, 00:23:52.262 "zone_management": false, 00:23:52.262 "zone_append": false, 00:23:52.262 "compare": false, 00:23:52.262 "compare_and_write": false, 00:23:52.262 "abort": true, 00:23:52.262 "seek_hole": false, 00:23:52.262 "seek_data": false, 00:23:52.262 "copy": true, 00:23:52.262 "nvme_iov_md": false 00:23:52.262 }, 00:23:52.262 "memory_domains": [ 00:23:52.262 { 00:23:52.262 "dma_device_id": "system", 00:23:52.262 "dma_device_type": 1 00:23:52.262 }, 00:23:52.262 { 00:23:52.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.262 "dma_device_type": 2 00:23:52.262 } 00:23:52.262 ], 00:23:52.262 "driver_specific": {} 00:23:52.262 }' 00:23:52.262 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.262 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:52.262 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:52.262 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:52.521 23:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:52.779 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:52.779 "name": "BaseBdev2", 00:23:52.779 "aliases": [ 00:23:52.779 "b7700ce9-89f5-47fa-a5c6-e2409f01891e" 00:23:52.779 ], 00:23:52.779 "product_name": "Malloc disk", 00:23:52.779 "block_size": 512, 00:23:52.779 "num_blocks": 65536, 00:23:52.779 "uuid": "b7700ce9-89f5-47fa-a5c6-e2409f01891e", 00:23:52.779 "assigned_rate_limits": { 00:23:52.779 "rw_ios_per_sec": 0, 00:23:52.779 "rw_mbytes_per_sec": 0, 00:23:52.779 "r_mbytes_per_sec": 0, 00:23:52.779 "w_mbytes_per_sec": 0 00:23:52.779 }, 00:23:52.779 "claimed": true, 00:23:52.779 "claim_type": "exclusive_write", 00:23:52.779 "zoned": false, 00:23:52.779 "supported_io_types": { 00:23:52.779 "read": true, 00:23:52.779 "write": true, 00:23:52.779 "unmap": true, 00:23:52.779 "flush": true, 00:23:52.779 "reset": true, 00:23:52.779 "nvme_admin": false, 00:23:52.779 "nvme_io": false, 00:23:52.779 "nvme_io_md": false, 00:23:52.779 "write_zeroes": true, 00:23:52.779 "zcopy": true, 00:23:52.779 "get_zone_info": false, 00:23:52.779 "zone_management": false, 00:23:52.779 "zone_append": false, 00:23:52.779 "compare": false, 00:23:52.779 "compare_and_write": false, 
00:23:52.779 "abort": true, 00:23:52.779 "seek_hole": false, 00:23:52.779 "seek_data": false, 00:23:52.779 "copy": true, 00:23:52.779 "nvme_iov_md": false 00:23:52.779 }, 00:23:52.779 "memory_domains": [ 00:23:52.779 { 00:23:52.779 "dma_device_id": "system", 00:23:52.779 "dma_device_type": 1 00:23:52.779 }, 00:23:52.779 { 00:23:52.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.779 "dma_device_type": 2 00:23:52.779 } 00:23:52.779 ], 00:23:52.779 "driver_specific": {} 00:23:52.779 }' 00:23:52.779 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.039 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:53.298 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.556 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.556 "name": "BaseBdev3", 00:23:53.556 "aliases": [ 00:23:53.556 "607c055b-27e0-4eff-b5d9-d24a96992437" 00:23:53.556 ], 00:23:53.556 "product_name": "Malloc disk", 00:23:53.556 "block_size": 512, 00:23:53.556 "num_blocks": 65536, 00:23:53.556 "uuid": "607c055b-27e0-4eff-b5d9-d24a96992437", 00:23:53.556 "assigned_rate_limits": { 00:23:53.556 "rw_ios_per_sec": 0, 00:23:53.556 "rw_mbytes_per_sec": 0, 00:23:53.556 "r_mbytes_per_sec": 0, 00:23:53.556 "w_mbytes_per_sec": 0 00:23:53.556 }, 00:23:53.556 "claimed": true, 00:23:53.556 "claim_type": "exclusive_write", 00:23:53.556 "zoned": false, 00:23:53.556 "supported_io_types": { 00:23:53.556 "read": true, 00:23:53.556 "write": true, 00:23:53.556 "unmap": true, 00:23:53.556 "flush": true, 00:23:53.556 "reset": true, 00:23:53.556 "nvme_admin": false, 00:23:53.556 "nvme_io": false, 00:23:53.556 "nvme_io_md": false, 00:23:53.556 "write_zeroes": true, 00:23:53.556 "zcopy": true, 00:23:53.556 "get_zone_info": false, 00:23:53.556 "zone_management": false, 00:23:53.556 "zone_append": false, 00:23:53.556 "compare": false, 00:23:53.556 "compare_and_write": false, 00:23:53.556 "abort": true, 00:23:53.556 "seek_hole": false, 00:23:53.556 "seek_data": false, 00:23:53.556 "copy": true, 00:23:53.556 "nvme_iov_md": 
false 00:23:53.556 }, 00:23:53.556 "memory_domains": [ 00:23:53.556 { 00:23:53.556 "dma_device_id": "system", 00:23:53.556 "dma_device_type": 1 00:23:53.556 }, 00:23:53.556 { 00:23:53.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.556 "dma_device_type": 2 00:23:53.556 } 00:23:53.556 ], 00:23:53.556 "driver_specific": {} 00:23:53.556 }' 00:23:53.556 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.556 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.556 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.556 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.556 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.814 23:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:53.814 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:54.071 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:54.071 "name": "BaseBdev4", 00:23:54.071 "aliases": [ 00:23:54.071 "11e4e187-d128-4dce-9ae4-18334dbb76b2" 00:23:54.071 ], 00:23:54.071 "product_name": "Malloc disk", 00:23:54.071 "block_size": 512, 00:23:54.071 "num_blocks": 65536, 00:23:54.071 "uuid": "11e4e187-d128-4dce-9ae4-18334dbb76b2", 00:23:54.071 "assigned_rate_limits": { 00:23:54.071 "rw_ios_per_sec": 0, 00:23:54.071 "rw_mbytes_per_sec": 0, 00:23:54.071 "r_mbytes_per_sec": 0, 00:23:54.071 "w_mbytes_per_sec": 0 00:23:54.071 }, 00:23:54.071 "claimed": true, 00:23:54.071 "claim_type": "exclusive_write", 00:23:54.071 "zoned": false, 00:23:54.071 "supported_io_types": { 00:23:54.071 "read": true, 00:23:54.071 "write": true, 00:23:54.071 "unmap": true, 00:23:54.071 "flush": true, 00:23:54.071 "reset": true, 00:23:54.071 "nvme_admin": false, 00:23:54.071 "nvme_io": false, 00:23:54.071 "nvme_io_md": false, 00:23:54.071 "write_zeroes": true, 00:23:54.071 "zcopy": true, 00:23:54.071 "get_zone_info": false, 00:23:54.071 "zone_management": false, 00:23:54.071 "zone_append": false, 00:23:54.071 "compare": false, 00:23:54.071 "compare_and_write": false, 00:23:54.071 "abort": true, 00:23:54.071 "seek_hole": false, 00:23:54.071 "seek_data": false, 00:23:54.071 "copy": true, 00:23:54.071 "nvme_iov_md": false 00:23:54.071 }, 00:23:54.071 "memory_domains": [ 00:23:54.071 { 00:23:54.071 "dma_device_id": "system", 00:23:54.071 "dma_device_type": 1 
00:23:54.071 }, 00:23:54.071 { 00:23:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.071 "dma_device_type": 2 00:23:54.071 } 00:23:54.071 ], 00:23:54.071 "driver_specific": {} 00:23:54.071 }' 00:23:54.071 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.071 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.328 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:54.328 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.328 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.328 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.329 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.329 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.329 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.329 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.586 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.586 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:54.586 23:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:54.843 [2024-07-13 23:09:44.034069] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:54.843 [2024-07-13 23:09:44.034360] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:54.843 [2024-07-13 23:09:44.034582] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.843 [2024-07-13 23:09:44.034771] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.843 [2024-07-13 23:09:44.034884] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 148315 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 148315 ']' 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 148315 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148315 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148315' 00:23:54.844 killing process with pid 148315 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- 
# kill 148315 00:23:54.844 [2024-07-13 23:09:44.072613] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.844 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 148315 00:23:54.844 [2024-07-13 23:09:44.107475] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.101 ************************************ 00:23:55.101 END TEST raid_state_function_test_sb 00:23:55.101 ************************************ 00:23:55.101 23:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:55.101 00:23:55.101 real 0m34.349s 00:23:55.101 user 1m5.551s 00:23:55.101 sys 0m3.994s 00:23:55.101 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.101 23:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 23:09:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:55.101 23:09:44 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:23:55.101 23:09:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:55.101 23:09:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.101 23:09:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 ************************************ 00:23:55.101 START TEST raid_superblock_test 00:23:55.101 ************************************ 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=149416 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 149416 /var/tmp/spdk-raid.sock 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 149416 ']' 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:55.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.101 23:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.101 [2024-07-13 23:09:44.447040] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:55.102 [2024-07-13 23:09:44.447411] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149416 ] 00:23:55.359 [2024-07-13 23:09:44.589284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.359 [2024-07-13 23:09:44.719927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.616 [2024-07-13 23:09:44.797497] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:56.182 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:56.440 malloc1 00:23:56.440 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:56.698 [2024-07-13 23:09:45.942437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:56.698 [2024-07-13 23:09:45.942952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
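
Each base device in the superblock test is built the same way; a condensed sketch assembled only from commands that appear verbatim in this trace (32 MiB malloc bdevs with 512-byte blocks, i.e. the 65536 x 512 geometry in the JSON dumps, a 64 KiB strip, and -s for the on-disk superblock):

# Create a 32 MiB malloc bdev with 512-byte blocks, then wrap it in a passthru bdev with a fixed UUID
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

# Once pt1..pt4 exist, assemble them into the concat raid with superblock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
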
00:23:56.698 [2024-07-13 23:09:45.943138] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:56.698 [2024-07-13 23:09:45.943320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.698 [2024-07-13 23:09:45.946599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.698 [2024-07-13 23:09:45.946844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:56.698 pt1 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:56.698 23:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:56.956 malloc2 00:23:56.956 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:57.214 [2024-07-13 23:09:46.446416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:57.214 [2024-07-13 23:09:46.446769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.214 [2024-07-13 23:09:46.446968] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:57.214 [2024-07-13 23:09:46.447150] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.214 [2024-07-13 23:09:46.449814] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.214 [2024-07-13 23:09:46.450014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:57.214 pt2 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:57.214 23:09:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:57.472 malloc3 00:23:57.472 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:57.730 [2024-07-13 23:09:46.901961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:57.730 [2024-07-13 23:09:46.902360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.730 [2024-07-13 23:09:46.902468] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:57.730 [2024-07-13 23:09:46.902770] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.730 [2024-07-13 23:09:46.905570] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.730 [2024-07-13 23:09:46.905770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:57.730 pt3 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:57.730 23:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:57.992 malloc4 00:23:57.992 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:57.992 [2024-07-13 23:09:47.361656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:57.992 [2024-07-13 23:09:47.362023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.992 [2024-07-13 23:09:47.362200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:57.992 [2024-07-13 23:09:47.362384] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.992 [2024-07-13 23:09:47.365023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.992 [2024-07-13 23:09:47.365228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:57.992 pt4 00:23:57.992 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:57.992 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:57.992 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:58.250 [2024-07-13 23:09:47.573771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:58.250 [2024-07-13 23:09:47.576150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:58.250 [2024-07-13 23:09:47.576390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:58.250 [2024-07-13 23:09:47.576490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:58.250 [2024-07-13 23:09:47.576813] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:23:58.250 [2024-07-13 23:09:47.576945] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:58.250 [2024-07-13 23:09:47.577234] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:23:58.250 [2024-07-13 23:09:47.577775] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:23:58.250 [2024-07-13 23:09:47.577903] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:23:58.250 [2024-07-13 23:09:47.578189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.251 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.509 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.509 "name": "raid_bdev1", 00:23:58.509 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:23:58.509 "strip_size_kb": 64, 00:23:58.509 "state": "online", 00:23:58.509 "raid_level": "concat", 00:23:58.509 "superblock": true, 00:23:58.509 "num_base_bdevs": 4, 00:23:58.510 "num_base_bdevs_discovered": 4, 00:23:58.510 "num_base_bdevs_operational": 4, 00:23:58.510 "base_bdevs_list": [ 00:23:58.510 { 00:23:58.510 "name": "pt1", 00:23:58.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.510 "is_configured": true, 00:23:58.510 "data_offset": 2048, 00:23:58.510 "data_size": 63488 00:23:58.510 }, 00:23:58.510 { 00:23:58.510 "name": "pt2", 00:23:58.510 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:23:58.510 "is_configured": true, 00:23:58.510 "data_offset": 2048, 00:23:58.510 "data_size": 63488 00:23:58.510 }, 00:23:58.510 { 00:23:58.510 "name": "pt3", 00:23:58.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:58.510 "is_configured": true, 00:23:58.510 "data_offset": 2048, 00:23:58.510 "data_size": 63488 00:23:58.510 }, 00:23:58.510 { 00:23:58.510 "name": "pt4", 00:23:58.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:58.510 "is_configured": true, 00:23:58.510 "data_offset": 2048, 00:23:58.510 "data_size": 63488 00:23:58.510 } 00:23:58.510 ] 00:23:58.510 }' 00:23:58.510 23:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.510 23:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:59.075 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:59.334 [2024-07-13 23:09:48.647007] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.334 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:59.334 "name": "raid_bdev1", 00:23:59.334 "aliases": [ 00:23:59.334 "616f7c50-6632-48fa-bd76-c73edb7232f6" 00:23:59.334 ], 00:23:59.334 "product_name": "Raid Volume", 00:23:59.334 "block_size": 512, 00:23:59.334 "num_blocks": 253952, 00:23:59.334 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:23:59.334 "assigned_rate_limits": { 00:23:59.334 "rw_ios_per_sec": 0, 00:23:59.334 "rw_mbytes_per_sec": 0, 00:23:59.334 "r_mbytes_per_sec": 0, 00:23:59.334 "w_mbytes_per_sec": 0 00:23:59.334 }, 00:23:59.334 "claimed": false, 00:23:59.334 "zoned": false, 00:23:59.334 "supported_io_types": { 00:23:59.334 "read": true, 00:23:59.334 "write": true, 00:23:59.334 "unmap": true, 00:23:59.334 "flush": true, 00:23:59.334 "reset": true, 00:23:59.334 "nvme_admin": false, 00:23:59.334 "nvme_io": false, 00:23:59.334 "nvme_io_md": false, 00:23:59.334 "write_zeroes": true, 00:23:59.334 "zcopy": false, 00:23:59.334 "get_zone_info": false, 00:23:59.334 "zone_management": false, 00:23:59.334 "zone_append": false, 00:23:59.334 "compare": false, 00:23:59.334 "compare_and_write": false, 00:23:59.334 "abort": false, 00:23:59.334 "seek_hole": false, 00:23:59.334 "seek_data": false, 00:23:59.334 "copy": false, 00:23:59.334 "nvme_iov_md": false 00:23:59.334 }, 00:23:59.334 "memory_domains": [ 00:23:59.334 { 00:23:59.334 "dma_device_id": "system", 00:23:59.334 "dma_device_type": 1 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.334 "dma_device_type": 2 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "system", 00:23:59.334 
"dma_device_type": 1 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.334 "dma_device_type": 2 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "system", 00:23:59.334 "dma_device_type": 1 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.334 "dma_device_type": 2 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "system", 00:23:59.334 "dma_device_type": 1 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.334 "dma_device_type": 2 00:23:59.334 } 00:23:59.334 ], 00:23:59.334 "driver_specific": { 00:23:59.334 "raid": { 00:23:59.334 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:23:59.334 "strip_size_kb": 64, 00:23:59.334 "state": "online", 00:23:59.334 "raid_level": "concat", 00:23:59.334 "superblock": true, 00:23:59.334 "num_base_bdevs": 4, 00:23:59.334 "num_base_bdevs_discovered": 4, 00:23:59.334 "num_base_bdevs_operational": 4, 00:23:59.334 "base_bdevs_list": [ 00:23:59.334 { 00:23:59.334 "name": "pt1", 00:23:59.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.334 "is_configured": true, 00:23:59.334 "data_offset": 2048, 00:23:59.334 "data_size": 63488 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "name": "pt2", 00:23:59.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.334 "is_configured": true, 00:23:59.334 "data_offset": 2048, 00:23:59.334 "data_size": 63488 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "name": "pt3", 00:23:59.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:59.334 "is_configured": true, 00:23:59.334 "data_offset": 2048, 00:23:59.334 "data_size": 63488 00:23:59.334 }, 00:23:59.334 { 00:23:59.334 "name": "pt4", 00:23:59.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:59.334 "is_configured": true, 00:23:59.334 "data_offset": 2048, 00:23:59.334 "data_size": 63488 00:23:59.334 } 00:23:59.334 ] 00:23:59.334 } 00:23:59.334 } 00:23:59.334 }' 00:23:59.334 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:59.334 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:59.334 pt2 00:23:59.334 pt3 00:23:59.334 pt4' 00:23:59.334 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:59.334 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:59.334 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:59.592 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:59.592 "name": "pt1", 00:23:59.592 "aliases": [ 00:23:59.592 "00000000-0000-0000-0000-000000000001" 00:23:59.592 ], 00:23:59.592 "product_name": "passthru", 00:23:59.592 "block_size": 512, 00:23:59.592 "num_blocks": 65536, 00:23:59.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.592 "assigned_rate_limits": { 00:23:59.592 "rw_ios_per_sec": 0, 00:23:59.592 "rw_mbytes_per_sec": 0, 00:23:59.592 "r_mbytes_per_sec": 0, 00:23:59.592 "w_mbytes_per_sec": 0 00:23:59.592 }, 00:23:59.592 "claimed": true, 00:23:59.592 "claim_type": "exclusive_write", 00:23:59.592 "zoned": false, 00:23:59.592 "supported_io_types": { 00:23:59.592 "read": true, 00:23:59.592 "write": true, 00:23:59.592 "unmap": true, 00:23:59.592 "flush": true, 00:23:59.592 "reset": 
true, 00:23:59.592 "nvme_admin": false, 00:23:59.592 "nvme_io": false, 00:23:59.592 "nvme_io_md": false, 00:23:59.592 "write_zeroes": true, 00:23:59.592 "zcopy": true, 00:23:59.592 "get_zone_info": false, 00:23:59.592 "zone_management": false, 00:23:59.592 "zone_append": false, 00:23:59.592 "compare": false, 00:23:59.592 "compare_and_write": false, 00:23:59.592 "abort": true, 00:23:59.592 "seek_hole": false, 00:23:59.592 "seek_data": false, 00:23:59.592 "copy": true, 00:23:59.592 "nvme_iov_md": false 00:23:59.592 }, 00:23:59.592 "memory_domains": [ 00:23:59.592 { 00:23:59.592 "dma_device_id": "system", 00:23:59.592 "dma_device_type": 1 00:23:59.592 }, 00:23:59.592 { 00:23:59.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.592 "dma_device_type": 2 00:23:59.592 } 00:23:59.592 ], 00:23:59.592 "driver_specific": { 00:23:59.592 "passthru": { 00:23:59.592 "name": "pt1", 00:23:59.592 "base_bdev_name": "malloc1" 00:23:59.592 } 00:23:59.592 } 00:23:59.592 }' 00:23:59.592 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.592 23:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.851 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.109 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.109 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:00.109 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.109 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.109 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:00.368 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.368 "name": "pt2", 00:24:00.368 "aliases": [ 00:24:00.368 "00000000-0000-0000-0000-000000000002" 00:24:00.368 ], 00:24:00.368 "product_name": "passthru", 00:24:00.368 "block_size": 512, 00:24:00.368 "num_blocks": 65536, 00:24:00.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.368 "assigned_rate_limits": { 00:24:00.368 "rw_ios_per_sec": 0, 00:24:00.368 "rw_mbytes_per_sec": 0, 00:24:00.368 "r_mbytes_per_sec": 0, 00:24:00.368 "w_mbytes_per_sec": 0 00:24:00.368 }, 00:24:00.368 "claimed": true, 00:24:00.368 "claim_type": "exclusive_write", 00:24:00.368 "zoned": false, 00:24:00.368 "supported_io_types": { 00:24:00.368 "read": true, 00:24:00.368 "write": true, 00:24:00.368 "unmap": true, 00:24:00.368 "flush": true, 00:24:00.368 "reset": true, 00:24:00.368 "nvme_admin": false, 00:24:00.368 "nvme_io": false, 00:24:00.368 "nvme_io_md": false, 00:24:00.368 "write_zeroes": true, 00:24:00.368 
"zcopy": true, 00:24:00.368 "get_zone_info": false, 00:24:00.368 "zone_management": false, 00:24:00.368 "zone_append": false, 00:24:00.368 "compare": false, 00:24:00.368 "compare_and_write": false, 00:24:00.368 "abort": true, 00:24:00.368 "seek_hole": false, 00:24:00.368 "seek_data": false, 00:24:00.368 "copy": true, 00:24:00.368 "nvme_iov_md": false 00:24:00.368 }, 00:24:00.368 "memory_domains": [ 00:24:00.368 { 00:24:00.368 "dma_device_id": "system", 00:24:00.368 "dma_device_type": 1 00:24:00.368 }, 00:24:00.368 { 00:24:00.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.368 "dma_device_type": 2 00:24:00.368 } 00:24:00.368 ], 00:24:00.368 "driver_specific": { 00:24:00.368 "passthru": { 00:24:00.368 "name": "pt2", 00:24:00.368 "base_bdev_name": "malloc2" 00:24:00.368 } 00:24:00.368 } 00:24:00.368 }' 00:24:00.368 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.368 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.368 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:00.368 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.368 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.626 23:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:00.885 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.885 "name": "pt3", 00:24:00.885 "aliases": [ 00:24:00.885 "00000000-0000-0000-0000-000000000003" 00:24:00.885 ], 00:24:00.885 "product_name": "passthru", 00:24:00.885 "block_size": 512, 00:24:00.885 "num_blocks": 65536, 00:24:00.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:00.885 "assigned_rate_limits": { 00:24:00.885 "rw_ios_per_sec": 0, 00:24:00.885 "rw_mbytes_per_sec": 0, 00:24:00.885 "r_mbytes_per_sec": 0, 00:24:00.885 "w_mbytes_per_sec": 0 00:24:00.885 }, 00:24:00.885 "claimed": true, 00:24:00.885 "claim_type": "exclusive_write", 00:24:00.885 "zoned": false, 00:24:00.885 "supported_io_types": { 00:24:00.885 "read": true, 00:24:00.885 "write": true, 00:24:00.885 "unmap": true, 00:24:00.885 "flush": true, 00:24:00.885 "reset": true, 00:24:00.885 "nvme_admin": false, 00:24:00.885 "nvme_io": false, 00:24:00.885 "nvme_io_md": false, 00:24:00.885 "write_zeroes": true, 00:24:00.885 "zcopy": true, 00:24:00.885 "get_zone_info": false, 00:24:00.885 "zone_management": false, 00:24:00.885 "zone_append": false, 00:24:00.885 "compare": 
false, 00:24:00.885 "compare_and_write": false, 00:24:00.885 "abort": true, 00:24:00.885 "seek_hole": false, 00:24:00.885 "seek_data": false, 00:24:00.885 "copy": true, 00:24:00.885 "nvme_iov_md": false 00:24:00.885 }, 00:24:00.885 "memory_domains": [ 00:24:00.885 { 00:24:00.885 "dma_device_id": "system", 00:24:00.885 "dma_device_type": 1 00:24:00.885 }, 00:24:00.885 { 00:24:00.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.885 "dma_device_type": 2 00:24:00.885 } 00:24:00.885 ], 00:24:00.885 "driver_specific": { 00:24:00.885 "passthru": { 00:24:00.885 "name": "pt3", 00:24:00.885 "base_bdev_name": "malloc3" 00:24:00.885 } 00:24:00.885 } 00:24:00.885 }' 00:24:00.885 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.885 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.885 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.144 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.423 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.423 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.423 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:01.423 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.681 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.681 "name": "pt4", 00:24:01.681 "aliases": [ 00:24:01.681 "00000000-0000-0000-0000-000000000004" 00:24:01.681 ], 00:24:01.681 "product_name": "passthru", 00:24:01.681 "block_size": 512, 00:24:01.681 "num_blocks": 65536, 00:24:01.681 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:01.681 "assigned_rate_limits": { 00:24:01.681 "rw_ios_per_sec": 0, 00:24:01.681 "rw_mbytes_per_sec": 0, 00:24:01.681 "r_mbytes_per_sec": 0, 00:24:01.681 "w_mbytes_per_sec": 0 00:24:01.681 }, 00:24:01.681 "claimed": true, 00:24:01.681 "claim_type": "exclusive_write", 00:24:01.681 "zoned": false, 00:24:01.681 "supported_io_types": { 00:24:01.681 "read": true, 00:24:01.681 "write": true, 00:24:01.681 "unmap": true, 00:24:01.681 "flush": true, 00:24:01.681 "reset": true, 00:24:01.681 "nvme_admin": false, 00:24:01.681 "nvme_io": false, 00:24:01.681 "nvme_io_md": false, 00:24:01.681 "write_zeroes": true, 00:24:01.681 "zcopy": true, 00:24:01.681 "get_zone_info": false, 00:24:01.681 "zone_management": false, 00:24:01.681 "zone_append": false, 00:24:01.681 "compare": false, 00:24:01.681 "compare_and_write": false, 00:24:01.681 "abort": true, 00:24:01.681 "seek_hole": false, 00:24:01.681 "seek_data": false, 00:24:01.681 
"copy": true, 00:24:01.681 "nvme_iov_md": false 00:24:01.681 }, 00:24:01.681 "memory_domains": [ 00:24:01.681 { 00:24:01.681 "dma_device_id": "system", 00:24:01.681 "dma_device_type": 1 00:24:01.681 }, 00:24:01.681 { 00:24:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.681 "dma_device_type": 2 00:24:01.681 } 00:24:01.681 ], 00:24:01.681 "driver_specific": { 00:24:01.681 "passthru": { 00:24:01.681 "name": "pt4", 00:24:01.681 "base_bdev_name": "malloc4" 00:24:01.681 } 00:24:01.681 } 00:24:01.681 }' 00:24:01.681 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.681 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.681 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.681 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.681 23:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.681 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.681 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.681 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.940 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.940 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.940 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.940 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.940 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:01.940 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:02.198 [2024-07-13 23:09:51.485880] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:02.198 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=616f7c50-6632-48fa-bd76-c73edb7232f6 00:24:02.198 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 616f7c50-6632-48fa-bd76-c73edb7232f6 ']' 00:24:02.198 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:02.456 [2024-07-13 23:09:51.769688] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:02.457 [2024-07-13 23:09:51.770024] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:02.457 [2024-07-13 23:09:51.770283] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.457 [2024-07-13 23:09:51.770544] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:02.457 [2024-07-13 23:09:51.770696] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:24:02.457 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:02.457 23:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.714 23:09:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:02.714 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:02.714 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:02.715 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:02.972 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:02.972 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:03.231 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.231 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:03.488 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:03.488 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:03.747 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:03.747 23:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:04.004 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:04.261 [2024-07-13 23:09:53.437762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:04.261 [2024-07-13 23:09:53.440100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:04.261 [2024-07-13 23:09:53.440283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:04.261 [2024-07-13 23:09:53.440365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:04.261 [2024-07-13 23:09:53.440584] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:04.261 [2024-07-13 23:09:53.440914] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:04.261 [2024-07-13 23:09:53.441117] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:04.261 [2024-07-13 23:09:53.441314] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:04.261 [2024-07-13 23:09:53.441447] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.261 [2024-07-13 23:09:53.441545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:24:04.261 request: 00:24:04.261 { 00:24:04.261 "name": "raid_bdev1", 00:24:04.261 "raid_level": "concat", 00:24:04.261 "base_bdevs": [ 00:24:04.261 "malloc1", 00:24:04.261 "malloc2", 00:24:04.261 "malloc3", 00:24:04.261 "malloc4" 00:24:04.261 ], 00:24:04.261 "strip_size_kb": 64, 00:24:04.261 "superblock": false, 00:24:04.261 "method": "bdev_raid_create", 00:24:04.261 "req_id": 1 00:24:04.261 } 00:24:04.261 Got JSON-RPC error response 00:24:04.261 response: 00:24:04.261 { 00:24:04.261 "code": -17, 00:24:04.261 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:04.261 } 00:24:04.261 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:04.261 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:04.261 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:04.261 23:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:04.261 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.261 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:04.519 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:04.519 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:04.519 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:04.777 [2024-07-13 23:09:53.929961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:04.777 [2024-07-13 23:09:53.930332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.777 [2024-07-13 23:09:53.930531] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:04.777 [2024-07-13 23:09:53.930700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.777 [2024-07-13 23:09:53.933250] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.777 [2024-07-13 23:09:53.933506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:04.777 [2024-07-13 23:09:53.933728] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:04.777 [2024-07-13 23:09:53.933907] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:04.777 pt1 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.777 23:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.777 23:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.777 "name": "raid_bdev1", 00:24:04.777 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:24:04.777 "strip_size_kb": 64, 00:24:04.777 "state": "configuring", 00:24:04.777 "raid_level": "concat", 00:24:04.777 "superblock": true, 00:24:04.777 "num_base_bdevs": 4, 00:24:04.777 "num_base_bdevs_discovered": 1, 00:24:04.777 "num_base_bdevs_operational": 4, 00:24:04.777 "base_bdevs_list": [ 00:24:04.777 { 00:24:04.777 "name": "pt1", 00:24:04.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:04.777 "is_configured": true, 00:24:04.777 "data_offset": 2048, 00:24:04.777 "data_size": 63488 00:24:04.777 }, 00:24:04.777 { 00:24:04.777 "name": null, 00:24:04.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.777 "is_configured": false, 00:24:04.777 "data_offset": 2048, 00:24:04.777 "data_size": 63488 00:24:04.777 }, 00:24:04.777 { 00:24:04.777 "name": null, 00:24:04.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:04.777 "is_configured": false, 00:24:04.777 "data_offset": 2048, 00:24:04.777 "data_size": 63488 00:24:04.777 }, 00:24:04.777 { 00:24:04.777 "name": null, 00:24:04.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:04.777 "is_configured": false, 00:24:04.777 "data_offset": 2048, 00:24:04.777 "data_size": 63488 00:24:04.777 } 00:24:04.777 ] 00:24:04.777 }' 00:24:04.777 23:09:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.777 23:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.343 23:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:05.343 23:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:05.615 [2024-07-13 23:09:55.006567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:05.615 [2024-07-13 23:09:55.007028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.615 [2024-07-13 23:09:55.007203] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:05.615 [2024-07-13 23:09:55.007336] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.615 [2024-07-13 23:09:55.008072] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.615 [2024-07-13 23:09:55.008260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:05.615 [2024-07-13 23:09:55.008543] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:05.615 [2024-07-13 23:09:55.008679] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:05.615 pt2 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:05.874 [2024-07-13 23:09:55.234660] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.874 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.132 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.132 "name": "raid_bdev1", 00:24:06.132 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:24:06.132 "strip_size_kb": 64, 00:24:06.132 "state": "configuring", 00:24:06.132 "raid_level": "concat", 00:24:06.132 "superblock": true, 00:24:06.132 
"num_base_bdevs": 4, 00:24:06.132 "num_base_bdevs_discovered": 1, 00:24:06.132 "num_base_bdevs_operational": 4, 00:24:06.132 "base_bdevs_list": [ 00:24:06.132 { 00:24:06.132 "name": "pt1", 00:24:06.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:06.132 "is_configured": true, 00:24:06.132 "data_offset": 2048, 00:24:06.132 "data_size": 63488 00:24:06.132 }, 00:24:06.132 { 00:24:06.132 "name": null, 00:24:06.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:06.132 "is_configured": false, 00:24:06.132 "data_offset": 2048, 00:24:06.132 "data_size": 63488 00:24:06.132 }, 00:24:06.132 { 00:24:06.132 "name": null, 00:24:06.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:06.132 "is_configured": false, 00:24:06.132 "data_offset": 2048, 00:24:06.132 "data_size": 63488 00:24:06.132 }, 00:24:06.132 { 00:24:06.133 "name": null, 00:24:06.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:06.133 "is_configured": false, 00:24:06.133 "data_offset": 2048, 00:24:06.133 "data_size": 63488 00:24:06.133 } 00:24:06.133 ] 00:24:06.133 }' 00:24:06.133 23:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.133 23:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.065 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:07.065 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.065 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:07.065 [2024-07-13 23:09:56.398929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:07.065 [2024-07-13 23:09:56.399304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.065 [2024-07-13 23:09:56.399413] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:07.065 [2024-07-13 23:09:56.399654] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.065 [2024-07-13 23:09:56.400465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.065 [2024-07-13 23:09:56.400725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:07.065 [2024-07-13 23:09:56.401021] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:07.065 [2024-07-13 23:09:56.401169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:07.065 pt2 00:24:07.065 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:07.065 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.065 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:07.322 [2024-07-13 23:09:56.618946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:07.322 [2024-07-13 23:09:56.619196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.322 [2024-07-13 23:09:56.619273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:07.322 [2024-07-13 23:09:56.619560] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.322 [2024-07-13 23:09:56.620165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.322 [2024-07-13 23:09:56.620345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:07.322 [2024-07-13 23:09:56.620540] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:07.322 [2024-07-13 23:09:56.620698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:07.322 pt3 00:24:07.322 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:07.322 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.322 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:07.581 [2024-07-13 23:09:56.819005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:07.581 [2024-07-13 23:09:56.819337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.581 [2024-07-13 23:09:56.819505] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:07.581 [2024-07-13 23:09:56.819638] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.581 [2024-07-13 23:09:56.820236] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.581 [2024-07-13 23:09:56.820419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:07.581 [2024-07-13 23:09:56.820693] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:07.581 [2024-07-13 23:09:56.820820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:07.581 [2024-07-13 23:09:56.821062] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:07.581 [2024-07-13 23:09:56.821210] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:07.581 [2024-07-13 23:09:56.821412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:24:07.581 [2024-07-13 23:09:56.821931] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:07.581 [2024-07-13 23:09:56.822080] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:07.581 [2024-07-13 23:09:56.822342] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.581 pt4 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:07.581 23:09:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.581 23:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.839 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.839 "name": "raid_bdev1", 00:24:07.839 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:24:07.839 "strip_size_kb": 64, 00:24:07.839 "state": "online", 00:24:07.839 "raid_level": "concat", 00:24:07.839 "superblock": true, 00:24:07.839 "num_base_bdevs": 4, 00:24:07.839 "num_base_bdevs_discovered": 4, 00:24:07.839 "num_base_bdevs_operational": 4, 00:24:07.839 "base_bdevs_list": [ 00:24:07.839 { 00:24:07.839 "name": "pt1", 00:24:07.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:07.839 "is_configured": true, 00:24:07.839 "data_offset": 2048, 00:24:07.839 "data_size": 63488 00:24:07.839 }, 00:24:07.839 { 00:24:07.839 "name": "pt2", 00:24:07.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:07.839 "is_configured": true, 00:24:07.839 "data_offset": 2048, 00:24:07.839 "data_size": 63488 00:24:07.839 }, 00:24:07.839 { 00:24:07.839 "name": "pt3", 00:24:07.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:07.839 "is_configured": true, 00:24:07.839 "data_offset": 2048, 00:24:07.839 "data_size": 63488 00:24:07.839 }, 00:24:07.839 { 00:24:07.839 "name": "pt4", 00:24:07.839 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:07.839 "is_configured": true, 00:24:07.839 "data_offset": 2048, 00:24:07.839 "data_size": 63488 00:24:07.839 } 00:24:07.839 ] 00:24:07.839 }' 00:24:07.839 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.839 23:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:08.405 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:08.663 [2024-07-13 23:09:57.983671] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:08.663 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:24:08.663 "name": "raid_bdev1", 00:24:08.663 "aliases": [ 00:24:08.663 "616f7c50-6632-48fa-bd76-c73edb7232f6" 00:24:08.663 ], 00:24:08.663 "product_name": "Raid Volume", 00:24:08.664 "block_size": 512, 00:24:08.664 "num_blocks": 253952, 00:24:08.664 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:24:08.664 "assigned_rate_limits": { 00:24:08.664 "rw_ios_per_sec": 0, 00:24:08.664 "rw_mbytes_per_sec": 0, 00:24:08.664 "r_mbytes_per_sec": 0, 00:24:08.664 "w_mbytes_per_sec": 0 00:24:08.664 }, 00:24:08.664 "claimed": false, 00:24:08.664 "zoned": false, 00:24:08.664 "supported_io_types": { 00:24:08.664 "read": true, 00:24:08.664 "write": true, 00:24:08.664 "unmap": true, 00:24:08.664 "flush": true, 00:24:08.664 "reset": true, 00:24:08.664 "nvme_admin": false, 00:24:08.664 "nvme_io": false, 00:24:08.664 "nvme_io_md": false, 00:24:08.664 "write_zeroes": true, 00:24:08.664 "zcopy": false, 00:24:08.664 "get_zone_info": false, 00:24:08.664 "zone_management": false, 00:24:08.664 "zone_append": false, 00:24:08.664 "compare": false, 00:24:08.664 "compare_and_write": false, 00:24:08.664 "abort": false, 00:24:08.664 "seek_hole": false, 00:24:08.664 "seek_data": false, 00:24:08.664 "copy": false, 00:24:08.664 "nvme_iov_md": false 00:24:08.664 }, 00:24:08.664 "memory_domains": [ 00:24:08.664 { 00:24:08.664 "dma_device_id": "system", 00:24:08.664 "dma_device_type": 1 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.664 "dma_device_type": 2 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "system", 00:24:08.664 "dma_device_type": 1 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.664 "dma_device_type": 2 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "system", 00:24:08.664 "dma_device_type": 1 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.664 "dma_device_type": 2 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "system", 00:24:08.664 "dma_device_type": 1 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.664 "dma_device_type": 2 00:24:08.664 } 00:24:08.664 ], 00:24:08.664 "driver_specific": { 00:24:08.664 "raid": { 00:24:08.664 "uuid": "616f7c50-6632-48fa-bd76-c73edb7232f6", 00:24:08.664 "strip_size_kb": 64, 00:24:08.664 "state": "online", 00:24:08.664 "raid_level": "concat", 00:24:08.664 "superblock": true, 00:24:08.664 "num_base_bdevs": 4, 00:24:08.664 "num_base_bdevs_discovered": 4, 00:24:08.664 "num_base_bdevs_operational": 4, 00:24:08.664 "base_bdevs_list": [ 00:24:08.664 { 00:24:08.664 "name": "pt1", 00:24:08.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:08.664 "is_configured": true, 00:24:08.664 "data_offset": 2048, 00:24:08.664 "data_size": 63488 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "name": "pt2", 00:24:08.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:08.664 "is_configured": true, 00:24:08.664 "data_offset": 2048, 00:24:08.664 "data_size": 63488 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "name": "pt3", 00:24:08.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:08.664 "is_configured": true, 00:24:08.664 "data_offset": 2048, 00:24:08.664 "data_size": 63488 00:24:08.664 }, 00:24:08.664 { 00:24:08.664 "name": "pt4", 00:24:08.664 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:08.664 "is_configured": true, 00:24:08.664 "data_offset": 2048, 00:24:08.664 "data_size": 63488 00:24:08.664 } 00:24:08.664 ] 00:24:08.664 } 
00:24:08.664 } 00:24:08.664 }' 00:24:08.664 23:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:08.664 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:08.664 pt2 00:24:08.664 pt3 00:24:08.664 pt4' 00:24:08.664 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:08.664 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:08.664 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:08.923 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:08.923 "name": "pt1", 00:24:08.923 "aliases": [ 00:24:08.923 "00000000-0000-0000-0000-000000000001" 00:24:08.923 ], 00:24:08.923 "product_name": "passthru", 00:24:08.923 "block_size": 512, 00:24:08.923 "num_blocks": 65536, 00:24:08.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:08.923 "assigned_rate_limits": { 00:24:08.923 "rw_ios_per_sec": 0, 00:24:08.923 "rw_mbytes_per_sec": 0, 00:24:08.923 "r_mbytes_per_sec": 0, 00:24:08.923 "w_mbytes_per_sec": 0 00:24:08.923 }, 00:24:08.923 "claimed": true, 00:24:08.923 "claim_type": "exclusive_write", 00:24:08.923 "zoned": false, 00:24:08.923 "supported_io_types": { 00:24:08.923 "read": true, 00:24:08.923 "write": true, 00:24:08.923 "unmap": true, 00:24:08.923 "flush": true, 00:24:08.923 "reset": true, 00:24:08.923 "nvme_admin": false, 00:24:08.923 "nvme_io": false, 00:24:08.923 "nvme_io_md": false, 00:24:08.923 "write_zeroes": true, 00:24:08.923 "zcopy": true, 00:24:08.923 "get_zone_info": false, 00:24:08.923 "zone_management": false, 00:24:08.923 "zone_append": false, 00:24:08.923 "compare": false, 00:24:08.923 "compare_and_write": false, 00:24:08.923 "abort": true, 00:24:08.923 "seek_hole": false, 00:24:08.923 "seek_data": false, 00:24:08.923 "copy": true, 00:24:08.923 "nvme_iov_md": false 00:24:08.923 }, 00:24:08.923 "memory_domains": [ 00:24:08.923 { 00:24:08.923 "dma_device_id": "system", 00:24:08.923 "dma_device_type": 1 00:24:08.923 }, 00:24:08.923 { 00:24:08.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.923 "dma_device_type": 2 00:24:08.923 } 00:24:08.923 ], 00:24:08.923 "driver_specific": { 00:24:08.923 "passthru": { 00:24:08.923 "name": "pt1", 00:24:08.923 "base_bdev_name": "malloc1" 00:24:08.923 } 00:24:08.923 } 00:24:08.923 }' 00:24:08.923 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.182 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:09.441 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:09.699 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:09.699 "name": "pt2", 00:24:09.699 "aliases": [ 00:24:09.699 "00000000-0000-0000-0000-000000000002" 00:24:09.699 ], 00:24:09.699 "product_name": "passthru", 00:24:09.699 "block_size": 512, 00:24:09.699 "num_blocks": 65536, 00:24:09.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:09.699 "assigned_rate_limits": { 00:24:09.699 "rw_ios_per_sec": 0, 00:24:09.699 "rw_mbytes_per_sec": 0, 00:24:09.699 "r_mbytes_per_sec": 0, 00:24:09.699 "w_mbytes_per_sec": 0 00:24:09.699 }, 00:24:09.699 "claimed": true, 00:24:09.699 "claim_type": "exclusive_write", 00:24:09.699 "zoned": false, 00:24:09.699 "supported_io_types": { 00:24:09.699 "read": true, 00:24:09.699 "write": true, 00:24:09.699 "unmap": true, 00:24:09.699 "flush": true, 00:24:09.699 "reset": true, 00:24:09.699 "nvme_admin": false, 00:24:09.699 "nvme_io": false, 00:24:09.699 "nvme_io_md": false, 00:24:09.699 "write_zeroes": true, 00:24:09.699 "zcopy": true, 00:24:09.699 "get_zone_info": false, 00:24:09.699 "zone_management": false, 00:24:09.699 "zone_append": false, 00:24:09.699 "compare": false, 00:24:09.699 "compare_and_write": false, 00:24:09.699 "abort": true, 00:24:09.699 "seek_hole": false, 00:24:09.699 "seek_data": false, 00:24:09.699 "copy": true, 00:24:09.699 "nvme_iov_md": false 00:24:09.699 }, 00:24:09.699 "memory_domains": [ 00:24:09.699 { 00:24:09.699 "dma_device_id": "system", 00:24:09.699 "dma_device_type": 1 00:24:09.699 }, 00:24:09.699 { 00:24:09.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.699 "dma_device_type": 2 00:24:09.699 } 00:24:09.699 ], 00:24:09.699 "driver_specific": { 00:24:09.699 "passthru": { 00:24:09.699 "name": "pt2", 00:24:09.699 "base_bdev_name": "malloc2" 00:24:09.699 } 00:24:09.699 } 00:24:09.699 }' 00:24:09.699 23:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.699 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.699 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:09.699 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.963 23:09:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:09.963 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:10.233 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:10.233 "name": "pt3", 00:24:10.233 "aliases": [ 00:24:10.233 "00000000-0000-0000-0000-000000000003" 00:24:10.233 ], 00:24:10.233 "product_name": "passthru", 00:24:10.233 "block_size": 512, 00:24:10.233 "num_blocks": 65536, 00:24:10.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:10.233 "assigned_rate_limits": { 00:24:10.233 "rw_ios_per_sec": 0, 00:24:10.233 "rw_mbytes_per_sec": 0, 00:24:10.233 "r_mbytes_per_sec": 0, 00:24:10.233 "w_mbytes_per_sec": 0 00:24:10.233 }, 00:24:10.233 "claimed": true, 00:24:10.233 "claim_type": "exclusive_write", 00:24:10.233 "zoned": false, 00:24:10.233 "supported_io_types": { 00:24:10.233 "read": true, 00:24:10.233 "write": true, 00:24:10.233 "unmap": true, 00:24:10.233 "flush": true, 00:24:10.234 "reset": true, 00:24:10.234 "nvme_admin": false, 00:24:10.234 "nvme_io": false, 00:24:10.234 "nvme_io_md": false, 00:24:10.234 "write_zeroes": true, 00:24:10.234 "zcopy": true, 00:24:10.234 "get_zone_info": false, 00:24:10.234 "zone_management": false, 00:24:10.234 "zone_append": false, 00:24:10.234 "compare": false, 00:24:10.234 "compare_and_write": false, 00:24:10.234 "abort": true, 00:24:10.234 "seek_hole": false, 00:24:10.234 "seek_data": false, 00:24:10.234 "copy": true, 00:24:10.234 "nvme_iov_md": false 00:24:10.234 }, 00:24:10.234 "memory_domains": [ 00:24:10.234 { 00:24:10.234 "dma_device_id": "system", 00:24:10.234 "dma_device_type": 1 00:24:10.234 }, 00:24:10.234 { 00:24:10.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.234 "dma_device_type": 2 00:24:10.234 } 00:24:10.234 ], 00:24:10.234 "driver_specific": { 00:24:10.234 "passthru": { 00:24:10.234 "name": "pt3", 00:24:10.234 "base_bdev_name": "malloc3" 00:24:10.234 } 00:24:10.234 } 00:24:10.234 }' 00:24:10.234 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:10.492 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.749 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.749 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:10.749 23:09:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:10.749 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:10.749 23:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:11.007 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.007 "name": "pt4", 00:24:11.007 "aliases": [ 00:24:11.007 "00000000-0000-0000-0000-000000000004" 00:24:11.007 ], 00:24:11.007 "product_name": "passthru", 00:24:11.007 "block_size": 512, 00:24:11.007 "num_blocks": 65536, 00:24:11.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:11.007 "assigned_rate_limits": { 00:24:11.007 "rw_ios_per_sec": 0, 00:24:11.007 "rw_mbytes_per_sec": 0, 00:24:11.007 "r_mbytes_per_sec": 0, 00:24:11.007 "w_mbytes_per_sec": 0 00:24:11.007 }, 00:24:11.007 "claimed": true, 00:24:11.007 "claim_type": "exclusive_write", 00:24:11.007 "zoned": false, 00:24:11.007 "supported_io_types": { 00:24:11.007 "read": true, 00:24:11.007 "write": true, 00:24:11.007 "unmap": true, 00:24:11.007 "flush": true, 00:24:11.007 "reset": true, 00:24:11.007 "nvme_admin": false, 00:24:11.007 "nvme_io": false, 00:24:11.007 "nvme_io_md": false, 00:24:11.007 "write_zeroes": true, 00:24:11.007 "zcopy": true, 00:24:11.007 "get_zone_info": false, 00:24:11.007 "zone_management": false, 00:24:11.007 "zone_append": false, 00:24:11.007 "compare": false, 00:24:11.007 "compare_and_write": false, 00:24:11.007 "abort": true, 00:24:11.007 "seek_hole": false, 00:24:11.007 "seek_data": false, 00:24:11.007 "copy": true, 00:24:11.007 "nvme_iov_md": false 00:24:11.007 }, 00:24:11.007 "memory_domains": [ 00:24:11.007 { 00:24:11.007 "dma_device_id": "system", 00:24:11.007 "dma_device_type": 1 00:24:11.007 }, 00:24:11.007 { 00:24:11.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.007 "dma_device_type": 2 00:24:11.007 } 00:24:11.007 ], 00:24:11.007 "driver_specific": { 00:24:11.007 "passthru": { 00:24:11.007 "name": "pt4", 00:24:11.007 "base_bdev_name": "malloc4" 00:24:11.007 } 00:24:11.007 } 00:24:11.007 }' 00:24:11.007 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.007 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.007 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.007 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.007 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.266 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:11.266 23:10:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:11.524 [2024-07-13 23:10:00.842437] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 616f7c50-6632-48fa-bd76-c73edb7232f6 '!=' 616f7c50-6632-48fa-bd76-c73edb7232f6 ']' 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 149416 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 149416 ']' 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 149416 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149416 00:24:11.525 killing process with pid 149416 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149416' 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 149416 00:24:11.525 23:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 149416 00:24:11.525 [2024-07-13 23:10:00.885648] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:11.525 [2024-07-13 23:10:00.885732] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.525 [2024-07-13 23:10:00.885806] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:11.525 [2024-07-13 23:10:00.885817] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:11.525 [2024-07-13 23:10:00.928595] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:11.783 23:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:11.783 00:24:11.783 real 0m16.760s 00:24:11.783 user 0m31.172s 00:24:11.783 sys 0m2.064s 00:24:11.783 23:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:11.783 23:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.783 ************************************ 00:24:11.783 END TEST raid_superblock_test 00:24:11.783 ************************************ 00:24:12.042 23:10:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:12.042 23:10:01 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:24:12.042 23:10:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:12.042 23:10:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.042 23:10:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:12.042 
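[Editor's note] The xtrace walked through above (bdev_raid.sh@201-208) is the superblock round-trip check at the heart of raid_superblock_test: the configured base bdev names are extracted from the raid config JSON, then each passthru bdev is fetched over RPC and its metadata asserted against what the superblock was written with. A minimal sketch of that loop, reconstructed from the trace; $rpc is an editorial shorthand for the rpc.py path and socket logged above, and $raid_cfg stands for the raid_bdev1 JSON captured just before this excerpt:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                             | select(.is_configured == true).name' <<<"$raid_cfg")
    for name in $base_bdev_names; do                     # pt1 pt2 pt3 pt4 in this run
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<<"$info") == 512  ]]    # data block size preserved
        [[ $(jq .md_size       <<<"$info") == null ]]    # no separate metadata area
        [[ $(jq .md_interleave <<<"$info") == null ]]
        [[ $(jq .dif_type      <<<"$info") == null ]]    # no DIF protection configured
    done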
************************************ 00:24:12.042 START TEST raid_read_error_test 00:24:12.042 ************************************ 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:12.042 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5WmPwsbcrh 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=149959 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 
-t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 149959 /var/tmp/spdk-raid.sock 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 149959 ']' 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:12.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.043 23:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.043 [2024-07-13 23:10:01.284605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:12.043 [2024-07-13 23:10:01.285044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149959 ] 00:24:12.043 [2024-07-13 23:10:01.423801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.300 [2024-07-13 23:10:01.511813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.300 [2024-07-13 23:10:01.589938] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.236 23:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.236 23:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:13.236 23:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:13.236 23:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:13.236 BaseBdev1_malloc 00:24:13.236 23:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:13.493 true 00:24:13.493 23:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:13.751 [2024-07-13 23:10:02.949936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:13.751 [2024-07-13 23:10:02.950333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.751 [2024-07-13 23:10:02.950524] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:24:13.751 [2024-07-13 23:10:02.950744] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.751 [2024-07-13 23:10:02.954032] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.751 [2024-07-13 23:10:02.954214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:13.751 BaseBdev1 00:24:13.751 23:10:02 
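[Editor's note] Each BaseBdevN fed into the array below is a three-layer stack, built so that an I/O failure can later be injected by name at the bottom of the stack: a malloc bdev, wrapped by an error-injection bdev (which takes the EE_ prefix, as the bdev_passthru_create call above shows), wrapped by a passthru bdev. The per-bdev RPC sequence as traced, using the same $rpc shorthand as the previous note:

    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc     # 32 MB backing store, 512 B blocks
    $rpc bdev_error_create BaseBdev1_malloc                # error bdev, exposed as EE_BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

The same triple repeats for BaseBdev2 through BaseBdev4 in the trace that follows.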
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:13.751 23:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:14.009 BaseBdev2_malloc 00:24:14.009 23:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:14.009 true 00:24:14.266 23:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:14.267 [2024-07-13 23:10:03.657189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:14.267 [2024-07-13 23:10:03.657520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.267 [2024-07-13 23:10:03.657731] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:14.267 [2024-07-13 23:10:03.657918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.267 [2024-07-13 23:10:03.660709] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.267 [2024-07-13 23:10:03.660886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:14.267 BaseBdev2 00:24:14.524 23:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:14.524 23:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:14.524 BaseBdev3_malloc 00:24:14.524 23:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:14.780 true 00:24:14.780 23:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:15.038 [2024-07-13 23:10:04.320267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:15.038 [2024-07-13 23:10:04.320580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.038 [2024-07-13 23:10:04.320747] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:15.038 [2024-07-13 23:10:04.320985] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.038 [2024-07-13 23:10:04.323835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.038 [2024-07-13 23:10:04.324068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:15.038 BaseBdev3 00:24:15.038 23:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:15.038 23:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:15.298 BaseBdev4_malloc 00:24:15.298 23:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:24:15.556 true 00:24:15.556 23:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:15.815 [2024-07-13 23:10:04.971213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:15.815 [2024-07-13 23:10:04.971510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.815 [2024-07-13 23:10:04.971755] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:15.815 [2024-07-13 23:10:04.971923] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.815 [2024-07-13 23:10:04.974811] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.815 [2024-07-13 23:10:04.975015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:15.815 BaseBdev4 00:24:15.815 23:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:15.815 [2024-07-13 23:10:05.187606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.816 [2024-07-13 23:10:05.190809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.816 [2024-07-13 23:10:05.191062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:15.816 [2024-07-13 23:10:05.191191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:15.816 [2024-07-13 23:10:05.191669] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:24:15.816 [2024-07-13 23:10:05.191800] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:15.816 [2024-07-13 23:10:05.192156] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:15.816 [2024-07-13 23:10:05.192821] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:24:15.816 [2024-07-13 23:10:05.193010] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:24:15.816 [2024-07-13 23:10:05.193303] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
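[Editor's note] With all four stacks registered, the array itself is assembled; the @819 call traced above is:

    $rpc bdev_raid_create -z 64 -r concat \
         -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # -z 64: 64 KiB strip size (matches "strip_size_kb": 64 in the JSON below)
    # -r concat: raid level under test; -s: persist a superblock on the base bdevs

The verify_raid_bdev_state helper, whose local-variable setup the trace is stepping through here, then confirms the array came up online with all four base bdevs discovered and operational.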
00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.816 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.075 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.075 "name": "raid_bdev1", 00:24:16.075 "uuid": "35fb2f12-7e2c-4479-999c-0a9c9ee82494", 00:24:16.075 "strip_size_kb": 64, 00:24:16.075 "state": "online", 00:24:16.075 "raid_level": "concat", 00:24:16.075 "superblock": true, 00:24:16.075 "num_base_bdevs": 4, 00:24:16.075 "num_base_bdevs_discovered": 4, 00:24:16.075 "num_base_bdevs_operational": 4, 00:24:16.075 "base_bdevs_list": [ 00:24:16.075 { 00:24:16.075 "name": "BaseBdev1", 00:24:16.075 "uuid": "77a77969-8020-5539-9fbb-84f4f599574c", 00:24:16.075 "is_configured": true, 00:24:16.075 "data_offset": 2048, 00:24:16.075 "data_size": 63488 00:24:16.075 }, 00:24:16.075 { 00:24:16.075 "name": "BaseBdev2", 00:24:16.075 "uuid": "64aeadd9-28cb-5144-9c85-d946ac1d3815", 00:24:16.075 "is_configured": true, 00:24:16.075 "data_offset": 2048, 00:24:16.075 "data_size": 63488 00:24:16.075 }, 00:24:16.075 { 00:24:16.075 "name": "BaseBdev3", 00:24:16.075 "uuid": "e29b5cc0-3b50-5b05-9224-f1e0405b4d17", 00:24:16.075 "is_configured": true, 00:24:16.075 "data_offset": 2048, 00:24:16.075 "data_size": 63488 00:24:16.075 }, 00:24:16.075 { 00:24:16.075 "name": "BaseBdev4", 00:24:16.075 "uuid": "8425eabf-d9a4-5d4c-a9dd-9da703a7d408", 00:24:16.075 "is_configured": true, 00:24:16.075 "data_offset": 2048, 00:24:16.075 "data_size": 63488 00:24:16.075 } 00:24:16.075 ] 00:24:16.075 }' 00:24:16.075 23:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.075 23:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.010 23:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:17.010 23:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:17.010 [2024-07-13 23:10:06.196477] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.944 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.203 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:18.203 "name": "raid_bdev1", 00:24:18.203 "uuid": "35fb2f12-7e2c-4479-999c-0a9c9ee82494", 00:24:18.203 "strip_size_kb": 64, 00:24:18.203 "state": "online", 00:24:18.203 "raid_level": "concat", 00:24:18.203 "superblock": true, 00:24:18.203 "num_base_bdevs": 4, 00:24:18.203 "num_base_bdevs_discovered": 4, 00:24:18.203 "num_base_bdevs_operational": 4, 00:24:18.203 "base_bdevs_list": [ 00:24:18.203 { 00:24:18.203 "name": "BaseBdev1", 00:24:18.203 "uuid": "77a77969-8020-5539-9fbb-84f4f599574c", 00:24:18.203 "is_configured": true, 00:24:18.203 "data_offset": 2048, 00:24:18.203 "data_size": 63488 00:24:18.203 }, 00:24:18.203 { 00:24:18.203 "name": "BaseBdev2", 00:24:18.203 "uuid": "64aeadd9-28cb-5144-9c85-d946ac1d3815", 00:24:18.203 "is_configured": true, 00:24:18.203 "data_offset": 2048, 00:24:18.203 "data_size": 63488 00:24:18.203 }, 00:24:18.203 { 00:24:18.203 "name": "BaseBdev3", 00:24:18.203 "uuid": "e29b5cc0-3b50-5b05-9224-f1e0405b4d17", 00:24:18.203 "is_configured": true, 00:24:18.203 "data_offset": 2048, 00:24:18.203 "data_size": 63488 00:24:18.203 }, 00:24:18.203 { 00:24:18.203 "name": "BaseBdev4", 00:24:18.203 "uuid": "8425eabf-d9a4-5d4c-a9dd-9da703a7d408", 00:24:18.203 "is_configured": true, 00:24:18.203 "data_offset": 2048, 00:24:18.203 "data_size": 63488 00:24:18.203 } 00:24:18.203 ] 00:24:18.203 }' 00:24:18.203 23:10:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.203 23:10:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.769 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:19.028 [2024-07-13 23:10:08.415856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:19.028 [2024-07-13 23:10:08.416248] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:19.028 [2024-07-13 23:10:08.418919] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.028 [2024-07-13 23:10:08.419126] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.028 [2024-07-13 23:10:08.419216] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:19.028 [2024-07-13 23:10:08.419473] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:19.028 0 00:24:19.028 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 149959 
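[Editor's note] This is the core of raid_io_error_test: bdevperf was launched earlier with -z (per the flow here, it idles until told to run) against a 60 s randrw workload with a 50/50 read/write mix (-t 60 -w randrw -M 50 -o 128k -q 1); perform_tests starts the I/O, a read failure is injected at the bottom of the BaseBdev1 stack, and after teardown the failure rate is scraped from the bdevperf log, as the killprocess and grep traces just below show. A sketch of those steps, again with the $rpc shorthand:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
    $rpc bdev_raid_delete raid_bdev1
    fail_per_s=$(grep -v Job /raidtest/tmp.5WmPwsbcrh | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != \0\.\0\0 ]]    # concat has no redundancy, so injected errors
                                     # must surface as bdevperf failures (0.45/s here)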
00:24:19.028 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 149959 ']' 00:24:19.028 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 149959 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149959 00:24:19.288 killing process with pid 149959 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149959' 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 149959 00:24:19.288 [2024-07-13 23:10:08.453649] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:19.288 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 149959 00:24:19.288 [2024-07-13 23:10:08.495492] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5WmPwsbcrh 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:24:19.565 00:24:19.565 real 0m7.598s 00:24:19.565 user 0m12.396s 00:24:19.565 sys 0m0.930s 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.565 23:10:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.565 ************************************ 00:24:19.565 END TEST raid_read_error_test 00:24:19.565 ************************************ 00:24:19.565 23:10:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:19.565 23:10:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:24:19.565 23:10:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:19.565 23:10:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.565 23:10:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.565 ************************************ 00:24:19.565 START TEST raid_write_error_test 00:24:19.565 ************************************ 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local 
num_base_bdevs=4 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.UECIHuTGXQ 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=150162 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 150162 /var/tmp/spdk-raid.sock 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 150162 ']' 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:19.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.565 23:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.565 [2024-07-13 23:10:08.949242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:19.565 [2024-07-13 23:10:08.949695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150162 ] 00:24:19.833 [2024-07-13 23:10:09.089989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.833 [2024-07-13 23:10:09.186511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.091 [2024-07-13 23:10:09.258232] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:20.091 23:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.091 23:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:20.091 23:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:20.091 23:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:20.349 BaseBdev1_malloc 00:24:20.349 23:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:20.607 true 00:24:20.607 23:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:20.865 [2024-07-13 23:10:10.083668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:20.865 [2024-07-13 23:10:10.084014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.865 [2024-07-13 23:10:10.084224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:24:20.865 [2024-07-13 23:10:10.084446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.865 [2024-07-13 23:10:10.088038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.865 [2024-07-13 23:10:10.088257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:20.865 BaseBdev1 00:24:20.865 23:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:20.865 23:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:21.123 BaseBdev2_malloc 00:24:21.123 23:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:21.381 true 00:24:21.381 23:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:21.663 [2024-07-13 23:10:10.798273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:21.663 [2024-07-13 23:10:10.798567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.663 [2024-07-13 23:10:10.798673] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:21.663 [2024-07-13 23:10:10.798961] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.663 [2024-07-13 23:10:10.801646] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.663 [2024-07-13 23:10:10.801823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:21.663 BaseBdev2 00:24:21.663 23:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:21.663 23:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:21.921 BaseBdev3_malloc 00:24:21.921 23:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:21.921 true 00:24:21.921 23:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:22.180 [2024-07-13 23:10:11.569983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:22.180 [2024-07-13 23:10:11.570362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.180 [2024-07-13 23:10:11.570458] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:22.180 [2024-07-13 23:10:11.570716] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.180 [2024-07-13 23:10:11.573325] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.180 [2024-07-13 23:10:11.573510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:22.180 BaseBdev3 00:24:22.180 23:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:22.438 23:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:22.438 BaseBdev4_malloc 00:24:22.696 23:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:22.696 true 00:24:22.696 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:22.954 [2024-07-13 23:10:12.272255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:22.954 [2024-07-13 
23:10:12.272609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.954 [2024-07-13 23:10:12.272774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:22.954 [2024-07-13 23:10:12.272935] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.954 [2024-07-13 23:10:12.275785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.954 [2024-07-13 23:10:12.275967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:22.954 BaseBdev4 00:24:22.954 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:23.212 [2024-07-13 23:10:12.480438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.212 [2024-07-13 23:10:12.482835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:23.212 [2024-07-13 23:10:12.483070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:23.212 [2024-07-13 23:10:12.483269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:23.212 [2024-07-13 23:10:12.483645] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:24:23.212 [2024-07-13 23:10:12.483791] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:23.212 [2024-07-13 23:10:12.483985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:23.212 [2024-07-13 23:10:12.484546] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:24:23.212 [2024-07-13 23:10:12.484671] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:24:23.212 [2024-07-13 23:10:12.484972] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.212 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:23.470 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.470 "name": "raid_bdev1", 00:24:23.470 "uuid": "f2a4a415-b376-4ebd-9a89-85a64de3fcc5", 00:24:23.470 "strip_size_kb": 64, 00:24:23.470 "state": "online", 00:24:23.470 "raid_level": "concat", 00:24:23.470 "superblock": true, 00:24:23.470 "num_base_bdevs": 4, 00:24:23.470 "num_base_bdevs_discovered": 4, 00:24:23.470 "num_base_bdevs_operational": 4, 00:24:23.470 "base_bdevs_list": [ 00:24:23.470 { 00:24:23.470 "name": "BaseBdev1", 00:24:23.470 "uuid": "5bc9e50f-e582-59da-8ca9-2a744c67c5b3", 00:24:23.470 "is_configured": true, 00:24:23.470 "data_offset": 2048, 00:24:23.470 "data_size": 63488 00:24:23.470 }, 00:24:23.470 { 00:24:23.470 "name": "BaseBdev2", 00:24:23.470 "uuid": "effd8a44-5df4-5abf-83ba-ce0abacfc282", 00:24:23.470 "is_configured": true, 00:24:23.470 "data_offset": 2048, 00:24:23.470 "data_size": 63488 00:24:23.470 }, 00:24:23.470 { 00:24:23.470 "name": "BaseBdev3", 00:24:23.470 "uuid": "a03a27d5-7912-583d-a951-1d40e27231b0", 00:24:23.470 "is_configured": true, 00:24:23.470 "data_offset": 2048, 00:24:23.470 "data_size": 63488 00:24:23.470 }, 00:24:23.470 { 00:24:23.470 "name": "BaseBdev4", 00:24:23.470 "uuid": "742318dc-37e8-552b-a6c0-6ca2d5ece63a", 00:24:23.470 "is_configured": true, 00:24:23.470 "data_offset": 2048, 00:24:23.470 "data_size": 63488 00:24:23.470 } 00:24:23.470 ] 00:24:23.470 }' 00:24:23.470 23:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.470 23:10:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.035 23:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:24.035 23:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:24.035 [2024-07-13 23:10:13.409647] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:24.966 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.223 23:10:14 
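[Editor's note] raid_write_error_test repeats the read-test flow with the error type flipped: the @827 call traced above injects 'write failure' instead of 'read failure' into the same EE_BaseBdev1_malloc error bdev. The body of verify_raid_bdev_state, whose locals are being set in the surrounding trace, is not shown in this excerpt; judging by the fields it reads from bdev_raid_get_bdevs, the checks amount to something like the following sketch (the jq expressions are an editorial reconstruction, not the script's verbatim code):

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state      <<<"$info") == online ]]          # expected_state
    [[ $(jq -r .raid_level <<<"$info") == concat ]]          # raid_level
    (( $(jq .strip_size_kb <<<"$info") == 64 ))              # strip_size
    (( $(jq .num_base_bdevs_discovered <<<"$info") == 4 ))   # num_base_bdevs_operational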
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.223 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.481 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.481 "name": "raid_bdev1", 00:24:25.481 "uuid": "f2a4a415-b376-4ebd-9a89-85a64de3fcc5", 00:24:25.481 "strip_size_kb": 64, 00:24:25.481 "state": "online", 00:24:25.481 "raid_level": "concat", 00:24:25.481 "superblock": true, 00:24:25.481 "num_base_bdevs": 4, 00:24:25.481 "num_base_bdevs_discovered": 4, 00:24:25.481 "num_base_bdevs_operational": 4, 00:24:25.481 "base_bdevs_list": [ 00:24:25.481 { 00:24:25.481 "name": "BaseBdev1", 00:24:25.481 "uuid": "5bc9e50f-e582-59da-8ca9-2a744c67c5b3", 00:24:25.481 "is_configured": true, 00:24:25.481 "data_offset": 2048, 00:24:25.481 "data_size": 63488 00:24:25.481 }, 00:24:25.481 { 00:24:25.481 "name": "BaseBdev2", 00:24:25.481 "uuid": "effd8a44-5df4-5abf-83ba-ce0abacfc282", 00:24:25.481 "is_configured": true, 00:24:25.481 "data_offset": 2048, 00:24:25.481 "data_size": 63488 00:24:25.481 }, 00:24:25.481 { 00:24:25.481 "name": "BaseBdev3", 00:24:25.481 "uuid": "a03a27d5-7912-583d-a951-1d40e27231b0", 00:24:25.481 "is_configured": true, 00:24:25.481 "data_offset": 2048, 00:24:25.481 "data_size": 63488 00:24:25.481 }, 00:24:25.481 { 00:24:25.481 "name": "BaseBdev4", 00:24:25.481 "uuid": "742318dc-37e8-552b-a6c0-6ca2d5ece63a", 00:24:25.481 "is_configured": true, 00:24:25.481 "data_offset": 2048, 00:24:25.481 "data_size": 63488 00:24:25.481 } 00:24:25.481 ] 00:24:25.481 }' 00:24:25.481 23:10:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.481 23:10:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:26.416 [2024-07-13 23:10:15.729149] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:26.416 [2024-07-13 23:10:15.729606] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:26.416 [2024-07-13 23:10:15.732761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:26.416 [2024-07-13 23:10:15.733036] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.416 [2024-07-13 23:10:15.733214] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:26.416 [2024-07-13 23:10:15.733399] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:26.416 0 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 150162 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 150162 ']' 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 150162 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:24:26.416 23:10:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150162 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150162' 00:24:26.416 killing process with pid 150162 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 150162 00:24:26.416 [2024-07-13 23:10:15.770549] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:26.416 23:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 150162 00:24:26.416 [2024-07-13 23:10:15.818792] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.UECIHuTGXQ 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:24:26.982 00:24:26.982 real 0m7.312s 00:24:26.982 user 0m11.961s 00:24:26.982 sys 0m1.080s 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.982 23:10:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.982 ************************************ 00:24:26.982 END TEST raid_write_error_test 00:24:26.982 ************************************ 00:24:26.982 23:10:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:26.982 23:10:16 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:24:26.982 23:10:16 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:24:26.982 23:10:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:26.982 23:10:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.982 23:10:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:26.982 ************************************ 00:24:26.982 START TEST raid_state_function_test 00:24:26.982 ************************************ 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:26.982 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=150356 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:26.983 Process raid pid: 150356 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 150356' 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 150356 /var/tmp/spdk-raid.sock 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 150356 ']' 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:26.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.983 23:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.983 [2024-07-13 23:10:16.315172] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:26.983 [2024-07-13 23:10:16.315559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.241 [2024-07-13 23:10:16.458686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.241 [2024-07-13 23:10:16.556763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.241 [2024-07-13 23:10:16.635464] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:28.182 [2024-07-13 23:10:17.503344] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:28.182 [2024-07-13 23:10:17.503711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:28.182 [2024-07-13 23:10:17.503841] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:28.182 [2024-07-13 23:10:17.503909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:28.182 [2024-07-13 23:10:17.504080] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:28.182 [2024-07-13 23:10:17.504175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:28.182 [2024-07-13 23:10:17.504391] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:28.182 [2024-07-13 23:10:17.504565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.182 
23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.182 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.441 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.441 "name": "Existed_Raid", 00:24:28.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.441 "strip_size_kb": 0, 00:24:28.441 "state": "configuring", 00:24:28.441 "raid_level": "raid1", 00:24:28.441 "superblock": false, 00:24:28.441 "num_base_bdevs": 4, 00:24:28.441 "num_base_bdevs_discovered": 0, 00:24:28.441 "num_base_bdevs_operational": 4, 00:24:28.441 "base_bdevs_list": [ 00:24:28.441 { 00:24:28.441 "name": "BaseBdev1", 00:24:28.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.441 "is_configured": false, 00:24:28.441 "data_offset": 0, 00:24:28.441 "data_size": 0 00:24:28.441 }, 00:24:28.441 { 00:24:28.441 "name": "BaseBdev2", 00:24:28.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.441 "is_configured": false, 00:24:28.441 "data_offset": 0, 00:24:28.441 "data_size": 0 00:24:28.441 }, 00:24:28.441 { 00:24:28.441 "name": "BaseBdev3", 00:24:28.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.441 "is_configured": false, 00:24:28.441 "data_offset": 0, 00:24:28.441 "data_size": 0 00:24:28.441 }, 00:24:28.441 { 00:24:28.441 "name": "BaseBdev4", 00:24:28.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.441 "is_configured": false, 00:24:28.441 "data_offset": 0, 00:24:28.441 "data_size": 0 00:24:28.441 } 00:24:28.441 ] 00:24:28.441 }' 00:24:28.441 23:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.441 23:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.008 23:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:29.266 [2024-07-13 23:10:18.560654] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.266 [2024-07-13 23:10:18.560898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:24:29.266 23:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:29.525 [2024-07-13 23:10:18.768668] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:29.525 [2024-07-13 23:10:18.768893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:29.525 [2024-07-13 23:10:18.769031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:29.525 [2024-07-13 23:10:18.769102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:24:29.525 [2024-07-13 23:10:18.769292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:29.525 [2024-07-13 23:10:18.769356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:29.525 [2024-07-13 23:10:18.769388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:29.525 [2024-07-13 23:10:18.769606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:29.525 23:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:29.784 [2024-07-13 23:10:19.000218] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:29.784 BaseBdev1 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:29.784 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:30.043 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:30.301 [ 00:24:30.301 { 00:24:30.301 "name": "BaseBdev1", 00:24:30.301 "aliases": [ 00:24:30.301 "6a012de5-3518-41e9-b8f9-736e99f7c5c0" 00:24:30.301 ], 00:24:30.301 "product_name": "Malloc disk", 00:24:30.301 "block_size": 512, 00:24:30.301 "num_blocks": 65536, 00:24:30.301 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:30.301 "assigned_rate_limits": { 00:24:30.301 "rw_ios_per_sec": 0, 00:24:30.301 "rw_mbytes_per_sec": 0, 00:24:30.301 "r_mbytes_per_sec": 0, 00:24:30.301 "w_mbytes_per_sec": 0 00:24:30.301 }, 00:24:30.301 "claimed": true, 00:24:30.301 "claim_type": "exclusive_write", 00:24:30.301 "zoned": false, 00:24:30.301 "supported_io_types": { 00:24:30.301 "read": true, 00:24:30.301 "write": true, 00:24:30.301 "unmap": true, 00:24:30.301 "flush": true, 00:24:30.301 "reset": true, 00:24:30.301 "nvme_admin": false, 00:24:30.301 "nvme_io": false, 00:24:30.301 "nvme_io_md": false, 00:24:30.301 "write_zeroes": true, 00:24:30.301 "zcopy": true, 00:24:30.301 "get_zone_info": false, 00:24:30.301 "zone_management": false, 00:24:30.301 "zone_append": false, 00:24:30.301 "compare": false, 00:24:30.301 "compare_and_write": false, 00:24:30.301 "abort": true, 00:24:30.301 "seek_hole": false, 00:24:30.301 "seek_data": false, 00:24:30.301 "copy": true, 00:24:30.301 "nvme_iov_md": false 00:24:30.301 }, 00:24:30.301 "memory_domains": [ 00:24:30.301 { 00:24:30.301 "dma_device_id": "system", 00:24:30.301 "dma_device_type": 1 00:24:30.301 }, 00:24:30.301 { 00:24:30.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.301 "dma_device_type": 2 00:24:30.301 } 
00:24:30.301 ], 00:24:30.301 "driver_specific": {} 00:24:30.301 } 00:24:30.301 ] 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.301 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.569 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.569 "name": "Existed_Raid", 00:24:30.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.569 "strip_size_kb": 0, 00:24:30.569 "state": "configuring", 00:24:30.569 "raid_level": "raid1", 00:24:30.569 "superblock": false, 00:24:30.569 "num_base_bdevs": 4, 00:24:30.569 "num_base_bdevs_discovered": 1, 00:24:30.569 "num_base_bdevs_operational": 4, 00:24:30.569 "base_bdevs_list": [ 00:24:30.569 { 00:24:30.569 "name": "BaseBdev1", 00:24:30.569 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:30.570 "is_configured": true, 00:24:30.570 "data_offset": 0, 00:24:30.570 "data_size": 65536 00:24:30.570 }, 00:24:30.570 { 00:24:30.570 "name": "BaseBdev2", 00:24:30.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.570 "is_configured": false, 00:24:30.570 "data_offset": 0, 00:24:30.570 "data_size": 0 00:24:30.570 }, 00:24:30.570 { 00:24:30.570 "name": "BaseBdev3", 00:24:30.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.570 "is_configured": false, 00:24:30.570 "data_offset": 0, 00:24:30.570 "data_size": 0 00:24:30.570 }, 00:24:30.570 { 00:24:30.570 "name": "BaseBdev4", 00:24:30.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.570 "is_configured": false, 00:24:30.570 "data_offset": 0, 00:24:30.570 "data_size": 0 00:24:30.570 } 00:24:30.570 ] 00:24:30.570 }' 00:24:30.570 23:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.570 23:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.151 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.410 [2024-07-13 23:10:20.684745] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:24:31.410 [2024-07-13 23:10:20.685113] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:24:31.410 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:31.669 [2024-07-13 23:10:20.900836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:31.669 [2024-07-13 23:10:20.903621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.669 [2024-07-13 23:10:20.903863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.669 [2024-07-13 23:10:20.904031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:31.669 [2024-07-13 23:10:20.904121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:31.669 [2024-07-13 23:10:20.904348] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:31.669 [2024-07-13 23:10:20.904429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.669 23:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.928 23:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.928 "name": "Existed_Raid", 00:24:31.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.928 "strip_size_kb": 0, 00:24:31.928 "state": "configuring", 00:24:31.928 "raid_level": "raid1", 00:24:31.928 "superblock": false, 00:24:31.928 "num_base_bdevs": 4, 00:24:31.928 "num_base_bdevs_discovered": 1, 00:24:31.928 "num_base_bdevs_operational": 4, 00:24:31.928 "base_bdevs_list": [ 00:24:31.928 { 
00:24:31.928 "name": "BaseBdev1", 00:24:31.928 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:31.928 "is_configured": true, 00:24:31.928 "data_offset": 0, 00:24:31.928 "data_size": 65536 00:24:31.928 }, 00:24:31.928 { 00:24:31.928 "name": "BaseBdev2", 00:24:31.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.928 "is_configured": false, 00:24:31.928 "data_offset": 0, 00:24:31.928 "data_size": 0 00:24:31.928 }, 00:24:31.928 { 00:24:31.928 "name": "BaseBdev3", 00:24:31.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.928 "is_configured": false, 00:24:31.928 "data_offset": 0, 00:24:31.928 "data_size": 0 00:24:31.928 }, 00:24:31.928 { 00:24:31.928 "name": "BaseBdev4", 00:24:31.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.928 "is_configured": false, 00:24:31.928 "data_offset": 0, 00:24:31.928 "data_size": 0 00:24:31.928 } 00:24:31.928 ] 00:24:31.928 }' 00:24:31.928 23:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.928 23:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.494 23:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:32.753 [2024-07-13 23:10:22.026660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:32.753 BaseBdev2 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:32.753 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:33.010 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:33.269 [ 00:24:33.269 { 00:24:33.269 "name": "BaseBdev2", 00:24:33.269 "aliases": [ 00:24:33.269 "8e8625f9-5ae8-4154-b3de-7edcf272bf64" 00:24:33.269 ], 00:24:33.269 "product_name": "Malloc disk", 00:24:33.269 "block_size": 512, 00:24:33.269 "num_blocks": 65536, 00:24:33.269 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:33.269 "assigned_rate_limits": { 00:24:33.269 "rw_ios_per_sec": 0, 00:24:33.269 "rw_mbytes_per_sec": 0, 00:24:33.269 "r_mbytes_per_sec": 0, 00:24:33.269 "w_mbytes_per_sec": 0 00:24:33.269 }, 00:24:33.269 "claimed": true, 00:24:33.269 "claim_type": "exclusive_write", 00:24:33.269 "zoned": false, 00:24:33.269 "supported_io_types": { 00:24:33.269 "read": true, 00:24:33.269 "write": true, 00:24:33.269 "unmap": true, 00:24:33.269 "flush": true, 00:24:33.269 "reset": true, 00:24:33.269 "nvme_admin": false, 00:24:33.269 "nvme_io": false, 00:24:33.269 "nvme_io_md": false, 00:24:33.269 "write_zeroes": true, 00:24:33.269 "zcopy": true, 00:24:33.269 "get_zone_info": false, 00:24:33.269 "zone_management": 
false, 00:24:33.269 "zone_append": false, 00:24:33.269 "compare": false, 00:24:33.269 "compare_and_write": false, 00:24:33.269 "abort": true, 00:24:33.269 "seek_hole": false, 00:24:33.269 "seek_data": false, 00:24:33.269 "copy": true, 00:24:33.269 "nvme_iov_md": false 00:24:33.269 }, 00:24:33.269 "memory_domains": [ 00:24:33.270 { 00:24:33.270 "dma_device_id": "system", 00:24:33.270 "dma_device_type": 1 00:24:33.270 }, 00:24:33.270 { 00:24:33.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.270 "dma_device_type": 2 00:24:33.270 } 00:24:33.270 ], 00:24:33.270 "driver_specific": {} 00:24:33.270 } 00:24:33.270 ] 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.270 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.528 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.528 "name": "Existed_Raid", 00:24:33.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.528 "strip_size_kb": 0, 00:24:33.528 "state": "configuring", 00:24:33.528 "raid_level": "raid1", 00:24:33.528 "superblock": false, 00:24:33.528 "num_base_bdevs": 4, 00:24:33.528 "num_base_bdevs_discovered": 2, 00:24:33.528 "num_base_bdevs_operational": 4, 00:24:33.528 "base_bdevs_list": [ 00:24:33.528 { 00:24:33.528 "name": "BaseBdev1", 00:24:33.528 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:33.528 "is_configured": true, 00:24:33.528 "data_offset": 0, 00:24:33.528 "data_size": 65536 00:24:33.528 }, 00:24:33.528 { 00:24:33.528 "name": "BaseBdev2", 00:24:33.528 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:33.528 "is_configured": true, 00:24:33.528 "data_offset": 0, 00:24:33.528 "data_size": 65536 00:24:33.528 }, 00:24:33.528 { 00:24:33.528 "name": "BaseBdev3", 00:24:33.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.528 "is_configured": false, 00:24:33.528 "data_offset": 0, 00:24:33.528 
"data_size": 0 00:24:33.528 }, 00:24:33.528 { 00:24:33.528 "name": "BaseBdev4", 00:24:33.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.528 "is_configured": false, 00:24:33.528 "data_offset": 0, 00:24:33.528 "data_size": 0 00:24:33.528 } 00:24:33.528 ] 00:24:33.528 }' 00:24:33.528 23:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.528 23:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.094 23:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:34.351 [2024-07-13 23:10:23.659455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:34.351 BaseBdev3 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:34.351 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.608 23:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:34.865 [ 00:24:34.865 { 00:24:34.865 "name": "BaseBdev3", 00:24:34.865 "aliases": [ 00:24:34.865 "539d2026-3b4d-4442-91bf-736d581d8c11" 00:24:34.865 ], 00:24:34.866 "product_name": "Malloc disk", 00:24:34.866 "block_size": 512, 00:24:34.866 "num_blocks": 65536, 00:24:34.866 "uuid": "539d2026-3b4d-4442-91bf-736d581d8c11", 00:24:34.866 "assigned_rate_limits": { 00:24:34.866 "rw_ios_per_sec": 0, 00:24:34.866 "rw_mbytes_per_sec": 0, 00:24:34.866 "r_mbytes_per_sec": 0, 00:24:34.866 "w_mbytes_per_sec": 0 00:24:34.866 }, 00:24:34.866 "claimed": true, 00:24:34.866 "claim_type": "exclusive_write", 00:24:34.866 "zoned": false, 00:24:34.866 "supported_io_types": { 00:24:34.866 "read": true, 00:24:34.866 "write": true, 00:24:34.866 "unmap": true, 00:24:34.866 "flush": true, 00:24:34.866 "reset": true, 00:24:34.866 "nvme_admin": false, 00:24:34.866 "nvme_io": false, 00:24:34.866 "nvme_io_md": false, 00:24:34.866 "write_zeroes": true, 00:24:34.866 "zcopy": true, 00:24:34.866 "get_zone_info": false, 00:24:34.866 "zone_management": false, 00:24:34.866 "zone_append": false, 00:24:34.866 "compare": false, 00:24:34.866 "compare_and_write": false, 00:24:34.866 "abort": true, 00:24:34.866 "seek_hole": false, 00:24:34.866 "seek_data": false, 00:24:34.866 "copy": true, 00:24:34.866 "nvme_iov_md": false 00:24:34.866 }, 00:24:34.866 "memory_domains": [ 00:24:34.866 { 00:24:34.866 "dma_device_id": "system", 00:24:34.866 "dma_device_type": 1 00:24:34.866 }, 00:24:34.866 { 00:24:34.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.866 "dma_device_type": 2 00:24:34.866 } 00:24:34.866 ], 00:24:34.866 "driver_specific": {} 00:24:34.866 } 00:24:34.866 ] 
00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.866 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.123 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:35.123 "name": "Existed_Raid", 00:24:35.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.124 "strip_size_kb": 0, 00:24:35.124 "state": "configuring", 00:24:35.124 "raid_level": "raid1", 00:24:35.124 "superblock": false, 00:24:35.124 "num_base_bdevs": 4, 00:24:35.124 "num_base_bdevs_discovered": 3, 00:24:35.124 "num_base_bdevs_operational": 4, 00:24:35.124 "base_bdevs_list": [ 00:24:35.124 { 00:24:35.124 "name": "BaseBdev1", 00:24:35.124 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:35.124 "is_configured": true, 00:24:35.124 "data_offset": 0, 00:24:35.124 "data_size": 65536 00:24:35.124 }, 00:24:35.124 { 00:24:35.124 "name": "BaseBdev2", 00:24:35.124 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:35.124 "is_configured": true, 00:24:35.124 "data_offset": 0, 00:24:35.124 "data_size": 65536 00:24:35.124 }, 00:24:35.124 { 00:24:35.124 "name": "BaseBdev3", 00:24:35.124 "uuid": "539d2026-3b4d-4442-91bf-736d581d8c11", 00:24:35.124 "is_configured": true, 00:24:35.124 "data_offset": 0, 00:24:35.124 "data_size": 65536 00:24:35.124 }, 00:24:35.124 { 00:24:35.124 "name": "BaseBdev4", 00:24:35.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.124 "is_configured": false, 00:24:35.124 "data_offset": 0, 00:24:35.124 "data_size": 0 00:24:35.124 } 00:24:35.124 ] 00:24:35.124 }' 00:24:35.124 23:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:35.124 23:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.689 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:35.946 [2024-07-13 23:10:25.328380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:35.946 [2024-07-13 23:10:25.328751] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:24:35.946 [2024-07-13 23:10:25.328802] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:35.946 [2024-07-13 23:10:25.329113] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:24:35.946 [2024-07-13 23:10:25.329806] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:24:35.946 [2024-07-13 23:10:25.329957] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:24:35.946 [2024-07-13 23:10:25.330430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.946 BaseBdev4 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:35.946 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.512 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:36.512 [ 00:24:36.512 { 00:24:36.512 "name": "BaseBdev4", 00:24:36.512 "aliases": [ 00:24:36.512 "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4" 00:24:36.512 ], 00:24:36.512 "product_name": "Malloc disk", 00:24:36.512 "block_size": 512, 00:24:36.512 "num_blocks": 65536, 00:24:36.512 "uuid": "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4", 00:24:36.512 "assigned_rate_limits": { 00:24:36.512 "rw_ios_per_sec": 0, 00:24:36.512 "rw_mbytes_per_sec": 0, 00:24:36.512 "r_mbytes_per_sec": 0, 00:24:36.512 "w_mbytes_per_sec": 0 00:24:36.512 }, 00:24:36.512 "claimed": true, 00:24:36.512 "claim_type": "exclusive_write", 00:24:36.512 "zoned": false, 00:24:36.512 "supported_io_types": { 00:24:36.512 "read": true, 00:24:36.512 "write": true, 00:24:36.512 "unmap": true, 00:24:36.512 "flush": true, 00:24:36.512 "reset": true, 00:24:36.512 "nvme_admin": false, 00:24:36.512 "nvme_io": false, 00:24:36.512 "nvme_io_md": false, 00:24:36.512 "write_zeroes": true, 00:24:36.512 "zcopy": true, 00:24:36.512 "get_zone_info": false, 00:24:36.512 "zone_management": false, 00:24:36.513 "zone_append": false, 00:24:36.513 "compare": false, 00:24:36.513 "compare_and_write": false, 00:24:36.513 "abort": true, 00:24:36.513 "seek_hole": false, 00:24:36.513 "seek_data": false, 00:24:36.513 "copy": true, 00:24:36.513 "nvme_iov_md": false 00:24:36.513 }, 00:24:36.513 "memory_domains": [ 00:24:36.513 { 00:24:36.513 "dma_device_id": "system", 00:24:36.513 "dma_device_type": 1 00:24:36.513 }, 00:24:36.513 { 00:24:36.513 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:36.513 "dma_device_type": 2 00:24:36.513 } 00:24:36.513 ], 00:24:36.513 "driver_specific": {} 00:24:36.513 } 00:24:36.513 ] 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.513 23:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.772 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.772 "name": "Existed_Raid", 00:24:36.772 "uuid": "4ac9777d-c123-4d96-b48c-4752c60b2ef3", 00:24:36.772 "strip_size_kb": 0, 00:24:36.772 "state": "online", 00:24:36.772 "raid_level": "raid1", 00:24:36.772 "superblock": false, 00:24:36.772 "num_base_bdevs": 4, 00:24:36.772 "num_base_bdevs_discovered": 4, 00:24:36.772 "num_base_bdevs_operational": 4, 00:24:36.772 "base_bdevs_list": [ 00:24:36.772 { 00:24:36.772 "name": "BaseBdev1", 00:24:36.772 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:36.772 "is_configured": true, 00:24:36.772 "data_offset": 0, 00:24:36.772 "data_size": 65536 00:24:36.772 }, 00:24:36.772 { 00:24:36.772 "name": "BaseBdev2", 00:24:36.772 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:36.772 "is_configured": true, 00:24:36.772 "data_offset": 0, 00:24:36.772 "data_size": 65536 00:24:36.772 }, 00:24:36.772 { 00:24:36.772 "name": "BaseBdev3", 00:24:36.772 "uuid": "539d2026-3b4d-4442-91bf-736d581d8c11", 00:24:36.772 "is_configured": true, 00:24:36.772 "data_offset": 0, 00:24:36.772 "data_size": 65536 00:24:36.772 }, 00:24:36.772 { 00:24:36.772 "name": "BaseBdev4", 00:24:36.772 "uuid": "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4", 00:24:36.772 "is_configured": true, 00:24:36.772 "data_offset": 0, 00:24:36.772 "data_size": 65536 00:24:36.772 } 00:24:36.772 ] 00:24:36.772 }' 00:24:36.772 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.772 23:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:37.339 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:37.598 [2024-07-13 23:10:26.917262] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.598 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:37.598 "name": "Existed_Raid", 00:24:37.598 "aliases": [ 00:24:37.598 "4ac9777d-c123-4d96-b48c-4752c60b2ef3" 00:24:37.598 ], 00:24:37.598 "product_name": "Raid Volume", 00:24:37.598 "block_size": 512, 00:24:37.598 "num_blocks": 65536, 00:24:37.598 "uuid": "4ac9777d-c123-4d96-b48c-4752c60b2ef3", 00:24:37.598 "assigned_rate_limits": { 00:24:37.598 "rw_ios_per_sec": 0, 00:24:37.598 "rw_mbytes_per_sec": 0, 00:24:37.598 "r_mbytes_per_sec": 0, 00:24:37.598 "w_mbytes_per_sec": 0 00:24:37.598 }, 00:24:37.598 "claimed": false, 00:24:37.598 "zoned": false, 00:24:37.598 "supported_io_types": { 00:24:37.598 "read": true, 00:24:37.598 "write": true, 00:24:37.598 "unmap": false, 00:24:37.598 "flush": false, 00:24:37.598 "reset": true, 00:24:37.598 "nvme_admin": false, 00:24:37.598 "nvme_io": false, 00:24:37.598 "nvme_io_md": false, 00:24:37.598 "write_zeroes": true, 00:24:37.598 "zcopy": false, 00:24:37.598 "get_zone_info": false, 00:24:37.598 "zone_management": false, 00:24:37.598 "zone_append": false, 00:24:37.598 "compare": false, 00:24:37.599 "compare_and_write": false, 00:24:37.599 "abort": false, 00:24:37.599 "seek_hole": false, 00:24:37.599 "seek_data": false, 00:24:37.599 "copy": false, 00:24:37.599 "nvme_iov_md": false 00:24:37.599 }, 00:24:37.599 "memory_domains": [ 00:24:37.599 { 00:24:37.599 "dma_device_id": "system", 00:24:37.599 "dma_device_type": 1 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.599 "dma_device_type": 2 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "system", 00:24:37.599 "dma_device_type": 1 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.599 "dma_device_type": 2 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "system", 00:24:37.599 "dma_device_type": 1 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.599 "dma_device_type": 2 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "system", 00:24:37.599 "dma_device_type": 1 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.599 "dma_device_type": 2 00:24:37.599 } 00:24:37.599 ], 00:24:37.599 "driver_specific": { 00:24:37.599 "raid": { 00:24:37.599 "uuid": "4ac9777d-c123-4d96-b48c-4752c60b2ef3", 00:24:37.599 "strip_size_kb": 0, 
00:24:37.599 "state": "online", 00:24:37.599 "raid_level": "raid1", 00:24:37.599 "superblock": false, 00:24:37.599 "num_base_bdevs": 4, 00:24:37.599 "num_base_bdevs_discovered": 4, 00:24:37.599 "num_base_bdevs_operational": 4, 00:24:37.599 "base_bdevs_list": [ 00:24:37.599 { 00:24:37.599 "name": "BaseBdev1", 00:24:37.599 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:37.599 "is_configured": true, 00:24:37.599 "data_offset": 0, 00:24:37.599 "data_size": 65536 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "name": "BaseBdev2", 00:24:37.599 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:37.599 "is_configured": true, 00:24:37.599 "data_offset": 0, 00:24:37.599 "data_size": 65536 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "name": "BaseBdev3", 00:24:37.599 "uuid": "539d2026-3b4d-4442-91bf-736d581d8c11", 00:24:37.599 "is_configured": true, 00:24:37.599 "data_offset": 0, 00:24:37.599 "data_size": 65536 00:24:37.599 }, 00:24:37.599 { 00:24:37.599 "name": "BaseBdev4", 00:24:37.599 "uuid": "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4", 00:24:37.599 "is_configured": true, 00:24:37.599 "data_offset": 0, 00:24:37.599 "data_size": 65536 00:24:37.599 } 00:24:37.599 ] 00:24:37.599 } 00:24:37.599 } 00:24:37.599 }' 00:24:37.599 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:37.599 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:37.599 BaseBdev2 00:24:37.599 BaseBdev3 00:24:37.599 BaseBdev4' 00:24:37.599 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:37.599 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:37.599 23:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.858 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.858 "name": "BaseBdev1", 00:24:37.858 "aliases": [ 00:24:37.858 "6a012de5-3518-41e9-b8f9-736e99f7c5c0" 00:24:37.858 ], 00:24:37.858 "product_name": "Malloc disk", 00:24:37.858 "block_size": 512, 00:24:37.858 "num_blocks": 65536, 00:24:37.858 "uuid": "6a012de5-3518-41e9-b8f9-736e99f7c5c0", 00:24:37.858 "assigned_rate_limits": { 00:24:37.858 "rw_ios_per_sec": 0, 00:24:37.858 "rw_mbytes_per_sec": 0, 00:24:37.858 "r_mbytes_per_sec": 0, 00:24:37.858 "w_mbytes_per_sec": 0 00:24:37.858 }, 00:24:37.858 "claimed": true, 00:24:37.858 "claim_type": "exclusive_write", 00:24:37.858 "zoned": false, 00:24:37.858 "supported_io_types": { 00:24:37.858 "read": true, 00:24:37.858 "write": true, 00:24:37.858 "unmap": true, 00:24:37.858 "flush": true, 00:24:37.858 "reset": true, 00:24:37.858 "nvme_admin": false, 00:24:37.858 "nvme_io": false, 00:24:37.858 "nvme_io_md": false, 00:24:37.858 "write_zeroes": true, 00:24:37.858 "zcopy": true, 00:24:37.858 "get_zone_info": false, 00:24:37.858 "zone_management": false, 00:24:37.858 "zone_append": false, 00:24:37.858 "compare": false, 00:24:37.858 "compare_and_write": false, 00:24:37.858 "abort": true, 00:24:37.858 "seek_hole": false, 00:24:37.858 "seek_data": false, 00:24:37.858 "copy": true, 00:24:37.858 "nvme_iov_md": false 00:24:37.858 }, 00:24:37.858 "memory_domains": [ 00:24:37.858 { 00:24:37.858 "dma_device_id": "system", 00:24:37.858 "dma_device_type": 1 00:24:37.858 }, 00:24:37.858 { 00:24:37.858 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.858 "dma_device_type": 2 00:24:37.858 } 00:24:37.858 ], 00:24:37.858 "driver_specific": {} 00:24:37.858 }' 00:24:37.858 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.858 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.116 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.375 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.375 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.375 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.375 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:38.375 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:38.634 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:38.634 "name": "BaseBdev2", 00:24:38.634 "aliases": [ 00:24:38.634 "8e8625f9-5ae8-4154-b3de-7edcf272bf64" 00:24:38.634 ], 00:24:38.634 "product_name": "Malloc disk", 00:24:38.634 "block_size": 512, 00:24:38.634 "num_blocks": 65536, 00:24:38.634 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:38.634 "assigned_rate_limits": { 00:24:38.634 "rw_ios_per_sec": 0, 00:24:38.634 "rw_mbytes_per_sec": 0, 00:24:38.634 "r_mbytes_per_sec": 0, 00:24:38.634 "w_mbytes_per_sec": 0 00:24:38.634 }, 00:24:38.634 "claimed": true, 00:24:38.634 "claim_type": "exclusive_write", 00:24:38.634 "zoned": false, 00:24:38.634 "supported_io_types": { 00:24:38.634 "read": true, 00:24:38.634 "write": true, 00:24:38.634 "unmap": true, 00:24:38.634 "flush": true, 00:24:38.634 "reset": true, 00:24:38.634 "nvme_admin": false, 00:24:38.634 "nvme_io": false, 00:24:38.634 "nvme_io_md": false, 00:24:38.634 "write_zeroes": true, 00:24:38.634 "zcopy": true, 00:24:38.634 "get_zone_info": false, 00:24:38.634 "zone_management": false, 00:24:38.634 "zone_append": false, 00:24:38.634 "compare": false, 00:24:38.634 "compare_and_write": false, 00:24:38.634 "abort": true, 00:24:38.634 "seek_hole": false, 00:24:38.634 "seek_data": false, 00:24:38.634 "copy": true, 00:24:38.634 "nvme_iov_md": false 00:24:38.634 }, 00:24:38.634 "memory_domains": [ 00:24:38.634 { 00:24:38.634 "dma_device_id": "system", 00:24:38.634 "dma_device_type": 1 00:24:38.634 }, 00:24:38.634 { 00:24:38.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.634 "dma_device_type": 2 00:24:38.634 } 00:24:38.634 ], 00:24:38.634 "driver_specific": {} 00:24:38.634 }' 00:24:38.634 23:10:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.634 23:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.634 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:38.634 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.893 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.151 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.151 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.151 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:39.151 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.410 "name": "BaseBdev3", 00:24:39.410 "aliases": [ 00:24:39.410 "539d2026-3b4d-4442-91bf-736d581d8c11" 00:24:39.410 ], 00:24:39.410 "product_name": "Malloc disk", 00:24:39.410 "block_size": 512, 00:24:39.410 "num_blocks": 65536, 00:24:39.410 "uuid": "539d2026-3b4d-4442-91bf-736d581d8c11", 00:24:39.410 "assigned_rate_limits": { 00:24:39.410 "rw_ios_per_sec": 0, 00:24:39.410 "rw_mbytes_per_sec": 0, 00:24:39.410 "r_mbytes_per_sec": 0, 00:24:39.410 "w_mbytes_per_sec": 0 00:24:39.410 }, 00:24:39.410 "claimed": true, 00:24:39.410 "claim_type": "exclusive_write", 00:24:39.410 "zoned": false, 00:24:39.410 "supported_io_types": { 00:24:39.410 "read": true, 00:24:39.410 "write": true, 00:24:39.410 "unmap": true, 00:24:39.410 "flush": true, 00:24:39.410 "reset": true, 00:24:39.410 "nvme_admin": false, 00:24:39.410 "nvme_io": false, 00:24:39.410 "nvme_io_md": false, 00:24:39.410 "write_zeroes": true, 00:24:39.410 "zcopy": true, 00:24:39.410 "get_zone_info": false, 00:24:39.410 "zone_management": false, 00:24:39.410 "zone_append": false, 00:24:39.410 "compare": false, 00:24:39.410 "compare_and_write": false, 00:24:39.410 "abort": true, 00:24:39.410 "seek_hole": false, 00:24:39.410 "seek_data": false, 00:24:39.410 "copy": true, 00:24:39.410 "nvme_iov_md": false 00:24:39.410 }, 00:24:39.410 "memory_domains": [ 00:24:39.410 { 00:24:39.410 "dma_device_id": "system", 00:24:39.410 "dma_device_type": 1 00:24:39.410 }, 00:24:39.410 { 00:24:39.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.410 "dma_device_type": 2 00:24:39.410 } 00:24:39.410 ], 00:24:39.410 "driver_specific": {} 00:24:39.410 }' 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.410 
23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.410 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:39.669 23:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.929 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.929 "name": "BaseBdev4", 00:24:39.929 "aliases": [ 00:24:39.929 "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4" 00:24:39.930 ], 00:24:39.930 "product_name": "Malloc disk", 00:24:39.930 "block_size": 512, 00:24:39.930 "num_blocks": 65536, 00:24:39.930 "uuid": "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4", 00:24:39.930 "assigned_rate_limits": { 00:24:39.930 "rw_ios_per_sec": 0, 00:24:39.930 "rw_mbytes_per_sec": 0, 00:24:39.930 "r_mbytes_per_sec": 0, 00:24:39.930 "w_mbytes_per_sec": 0 00:24:39.930 }, 00:24:39.930 "claimed": true, 00:24:39.930 "claim_type": "exclusive_write", 00:24:39.930 "zoned": false, 00:24:39.930 "supported_io_types": { 00:24:39.930 "read": true, 00:24:39.930 "write": true, 00:24:39.930 "unmap": true, 00:24:39.930 "flush": true, 00:24:39.930 "reset": true, 00:24:39.930 "nvme_admin": false, 00:24:39.930 "nvme_io": false, 00:24:39.930 "nvme_io_md": false, 00:24:39.930 "write_zeroes": true, 00:24:39.930 "zcopy": true, 00:24:39.930 "get_zone_info": false, 00:24:39.930 "zone_management": false, 00:24:39.930 "zone_append": false, 00:24:39.930 "compare": false, 00:24:39.930 "compare_and_write": false, 00:24:39.930 "abort": true, 00:24:39.930 "seek_hole": false, 00:24:39.930 "seek_data": false, 00:24:39.930 "copy": true, 00:24:39.930 "nvme_iov_md": false 00:24:39.930 }, 00:24:39.930 "memory_domains": [ 00:24:39.930 { 00:24:39.930 "dma_device_id": "system", 00:24:39.930 "dma_device_type": 1 00:24:39.930 }, 00:24:39.930 { 00:24:39.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.930 "dma_device_type": 2 00:24:39.930 } 00:24:39.930 ], 00:24:39.930 "driver_specific": {} 00:24:39.930 }' 00:24:39.930 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.930 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
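
The trace around this point is set -x output from bdev_raid.sh's raid_state_function_test: four Malloc base bdevs are created and claimed one by one, Existed_Raid moves from "configuring" to "online", and just below BaseBdev1 is deleted, after which the raid1 volume is expected to stay online with 3 of 4 base bdevs discovered. As a minimal, hedged sketch of the same RPC flow — socket path, bdev sizes, and RPC names are copied from this run, but this is manual replay, not the test script itself, and the ordering differs slightly (all base bdevs are created before the raid, so it configures immediately):

# assumes an SPDK app is already listening on /var/tmp/spdk-raid.sock,
# as bdev_svc is in this run
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# create four 32 MiB malloc base bdevs with 512-byte blocks
for i in 1 2 3 4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# assemble them into a raid1 bdev (this test passes no superblock flag)
"$rpc" -s "$sock" bdev_raid_create -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# query the raid state the same way verify_raid_bdev_state does
"$rpc" -s "$sock" bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid").state'          # expect: online

# raid1 is redundant, so removing one base bdev should leave it online
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
"$rpc" -s "$sock" bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid").num_base_bdevs_discovered'   # expect: 3

# tear down
"$rpc" -s "$sock" bdev_raid_delete Existed_Raid

The jq filters mirror the ones in the trace: bdev_raid_get_bdevs returns an array of objects carrying name, state, num_base_bdevs_discovered and base_bdevs_list, which is what the test's verify_raid_bdev_state helper asserts against.
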
00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.201 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.470 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.470 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:40.729 [2024-07-13 23:10:29.914308] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.729 23:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.988 23:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.988 "name": "Existed_Raid", 00:24:40.988 "uuid": "4ac9777d-c123-4d96-b48c-4752c60b2ef3", 00:24:40.988 "strip_size_kb": 0, 00:24:40.988 "state": "online", 00:24:40.988 "raid_level": "raid1", 00:24:40.988 "superblock": false, 00:24:40.988 "num_base_bdevs": 4, 00:24:40.988 "num_base_bdevs_discovered": 3, 00:24:40.988 
"num_base_bdevs_operational": 3, 00:24:40.988 "base_bdevs_list": [ 00:24:40.988 { 00:24:40.988 "name": null, 00:24:40.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.988 "is_configured": false, 00:24:40.988 "data_offset": 0, 00:24:40.988 "data_size": 65536 00:24:40.988 }, 00:24:40.988 { 00:24:40.988 "name": "BaseBdev2", 00:24:40.988 "uuid": "8e8625f9-5ae8-4154-b3de-7edcf272bf64", 00:24:40.988 "is_configured": true, 00:24:40.988 "data_offset": 0, 00:24:40.988 "data_size": 65536 00:24:40.988 }, 00:24:40.988 { 00:24:40.988 "name": "BaseBdev3", 00:24:40.988 "uuid": "539d2026-3b4d-4442-91bf-736d581d8c11", 00:24:40.988 "is_configured": true, 00:24:40.988 "data_offset": 0, 00:24:40.988 "data_size": 65536 00:24:40.988 }, 00:24:40.988 { 00:24:40.988 "name": "BaseBdev4", 00:24:40.988 "uuid": "a9da49ae-3c0c-4b37-97f8-cb5d27b0bbd4", 00:24:40.988 "is_configured": true, 00:24:40.988 "data_offset": 0, 00:24:40.988 "data_size": 65536 00:24:40.988 } 00:24:40.988 ] 00:24:40.988 }' 00:24:40.988 23:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.988 23:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.554 23:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:41.554 23:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:41.554 23:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.554 23:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:41.812 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:41.812 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:41.812 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:42.070 [2024-07-13 23:10:31.372174] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:42.070 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:42.070 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:42.070 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.070 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:42.328 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:42.328 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:42.328 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:42.587 [2024-07-13 23:10:31.861578] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:42.587 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:42.587 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:42.587 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq 
-r '.[0]["name"]' 00:24:42.587 23:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.845 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:42.845 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:42.845 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:43.103 [2024-07-13 23:10:32.407220] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:43.103 [2024-07-13 23:10:32.407584] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:43.103 [2024-07-13 23:10:32.420904] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:43.103 [2024-07-13 23:10:32.421176] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:43.103 [2024-07-13 23:10:32.421326] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:24:43.103 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:43.103 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:43.103 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.103 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:43.361 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:43.361 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:43.361 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:43.361 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:43.361 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:43.361 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:43.620 BaseBdev2 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:43.620 23:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:43.880 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:44.137 [ 00:24:44.137 { 00:24:44.137 "name": "BaseBdev2", 00:24:44.137 "aliases": [ 00:24:44.137 "0157e6b5-ec0a-4793-89ea-c649d8c7af7f" 00:24:44.137 ], 00:24:44.137 "product_name": "Malloc disk", 00:24:44.137 "block_size": 512, 00:24:44.137 "num_blocks": 65536, 00:24:44.137 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:44.137 "assigned_rate_limits": { 00:24:44.137 "rw_ios_per_sec": 0, 00:24:44.137 "rw_mbytes_per_sec": 0, 00:24:44.137 "r_mbytes_per_sec": 0, 00:24:44.137 "w_mbytes_per_sec": 0 00:24:44.137 }, 00:24:44.137 "claimed": false, 00:24:44.137 "zoned": false, 00:24:44.137 "supported_io_types": { 00:24:44.137 "read": true, 00:24:44.137 "write": true, 00:24:44.137 "unmap": true, 00:24:44.137 "flush": true, 00:24:44.137 "reset": true, 00:24:44.137 "nvme_admin": false, 00:24:44.137 "nvme_io": false, 00:24:44.137 "nvme_io_md": false, 00:24:44.137 "write_zeroes": true, 00:24:44.137 "zcopy": true, 00:24:44.137 "get_zone_info": false, 00:24:44.137 "zone_management": false, 00:24:44.137 "zone_append": false, 00:24:44.137 "compare": false, 00:24:44.137 "compare_and_write": false, 00:24:44.137 "abort": true, 00:24:44.137 "seek_hole": false, 00:24:44.137 "seek_data": false, 00:24:44.137 "copy": true, 00:24:44.137 "nvme_iov_md": false 00:24:44.137 }, 00:24:44.137 "memory_domains": [ 00:24:44.137 { 00:24:44.137 "dma_device_id": "system", 00:24:44.137 "dma_device_type": 1 00:24:44.137 }, 00:24:44.137 { 00:24:44.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.137 "dma_device_type": 2 00:24:44.137 } 00:24:44.137 ], 00:24:44.137 "driver_specific": {} 00:24:44.137 } 00:24:44.137 ] 00:24:44.137 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:44.137 23:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:44.137 23:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:44.138 23:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:44.395 BaseBdev3 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:44.395 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.653 23:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:44.911 [ 00:24:44.911 { 00:24:44.911 "name": "BaseBdev3", 00:24:44.911 "aliases": [ 00:24:44.911 "ac303cc4-9348-4c8f-9f30-7a325c6887ed" 00:24:44.911 ], 00:24:44.911 "product_name": "Malloc disk", 00:24:44.911 "block_size": 512, 00:24:44.911 "num_blocks": 65536, 00:24:44.911 
"uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:44.911 "assigned_rate_limits": { 00:24:44.911 "rw_ios_per_sec": 0, 00:24:44.911 "rw_mbytes_per_sec": 0, 00:24:44.911 "r_mbytes_per_sec": 0, 00:24:44.911 "w_mbytes_per_sec": 0 00:24:44.911 }, 00:24:44.911 "claimed": false, 00:24:44.911 "zoned": false, 00:24:44.911 "supported_io_types": { 00:24:44.911 "read": true, 00:24:44.911 "write": true, 00:24:44.911 "unmap": true, 00:24:44.911 "flush": true, 00:24:44.911 "reset": true, 00:24:44.911 "nvme_admin": false, 00:24:44.911 "nvme_io": false, 00:24:44.911 "nvme_io_md": false, 00:24:44.911 "write_zeroes": true, 00:24:44.911 "zcopy": true, 00:24:44.911 "get_zone_info": false, 00:24:44.911 "zone_management": false, 00:24:44.911 "zone_append": false, 00:24:44.911 "compare": false, 00:24:44.911 "compare_and_write": false, 00:24:44.911 "abort": true, 00:24:44.911 "seek_hole": false, 00:24:44.911 "seek_data": false, 00:24:44.911 "copy": true, 00:24:44.911 "nvme_iov_md": false 00:24:44.911 }, 00:24:44.911 "memory_domains": [ 00:24:44.911 { 00:24:44.911 "dma_device_id": "system", 00:24:44.911 "dma_device_type": 1 00:24:44.911 }, 00:24:44.911 { 00:24:44.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.911 "dma_device_type": 2 00:24:44.911 } 00:24:44.911 ], 00:24:44.911 "driver_specific": {} 00:24:44.911 } 00:24:44.911 ] 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:44.911 BaseBdev4 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:44.911 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:45.170 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:45.427 [ 00:24:45.427 { 00:24:45.427 "name": "BaseBdev4", 00:24:45.427 "aliases": [ 00:24:45.427 "217629fa-3b61-4c1c-acbe-5f1cee02cdbb" 00:24:45.427 ], 00:24:45.427 "product_name": "Malloc disk", 00:24:45.427 "block_size": 512, 00:24:45.427 "num_blocks": 65536, 00:24:45.427 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:45.427 "assigned_rate_limits": { 00:24:45.427 "rw_ios_per_sec": 0, 00:24:45.427 "rw_mbytes_per_sec": 0, 00:24:45.427 "r_mbytes_per_sec": 0, 00:24:45.427 "w_mbytes_per_sec": 0 00:24:45.427 }, 00:24:45.427 "claimed": false, 00:24:45.427 "zoned": false, 00:24:45.427 "supported_io_types": { 
00:24:45.427 "read": true, 00:24:45.427 "write": true, 00:24:45.427 "unmap": true, 00:24:45.427 "flush": true, 00:24:45.427 "reset": true, 00:24:45.427 "nvme_admin": false, 00:24:45.427 "nvme_io": false, 00:24:45.427 "nvme_io_md": false, 00:24:45.427 "write_zeroes": true, 00:24:45.427 "zcopy": true, 00:24:45.427 "get_zone_info": false, 00:24:45.427 "zone_management": false, 00:24:45.427 "zone_append": false, 00:24:45.427 "compare": false, 00:24:45.427 "compare_and_write": false, 00:24:45.427 "abort": true, 00:24:45.427 "seek_hole": false, 00:24:45.427 "seek_data": false, 00:24:45.427 "copy": true, 00:24:45.427 "nvme_iov_md": false 00:24:45.427 }, 00:24:45.427 "memory_domains": [ 00:24:45.428 { 00:24:45.428 "dma_device_id": "system", 00:24:45.428 "dma_device_type": 1 00:24:45.428 }, 00:24:45.428 { 00:24:45.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.428 "dma_device_type": 2 00:24:45.428 } 00:24:45.428 ], 00:24:45.428 "driver_specific": {} 00:24:45.428 } 00:24:45.428 ] 00:24:45.428 23:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:45.428 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:45.428 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:45.428 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:45.686 [2024-07-13 23:10:34.946902] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:45.686 [2024-07-13 23:10:34.947239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:45.686 [2024-07-13 23:10:34.947387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.686 [2024-07-13 23:10:34.949950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:45.686 [2024-07-13 23:10:34.950226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.686 23:10:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.945 23:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.945 "name": "Existed_Raid", 00:24:45.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.945 "strip_size_kb": 0, 00:24:45.945 "state": "configuring", 00:24:45.945 "raid_level": "raid1", 00:24:45.945 "superblock": false, 00:24:45.945 "num_base_bdevs": 4, 00:24:45.945 "num_base_bdevs_discovered": 3, 00:24:45.945 "num_base_bdevs_operational": 4, 00:24:45.945 "base_bdevs_list": [ 00:24:45.945 { 00:24:45.945 "name": "BaseBdev1", 00:24:45.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.945 "is_configured": false, 00:24:45.945 "data_offset": 0, 00:24:45.945 "data_size": 0 00:24:45.945 }, 00:24:45.945 { 00:24:45.945 "name": "BaseBdev2", 00:24:45.945 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:45.945 "is_configured": true, 00:24:45.945 "data_offset": 0, 00:24:45.945 "data_size": 65536 00:24:45.945 }, 00:24:45.945 { 00:24:45.945 "name": "BaseBdev3", 00:24:45.945 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:45.945 "is_configured": true, 00:24:45.945 "data_offset": 0, 00:24:45.945 "data_size": 65536 00:24:45.945 }, 00:24:45.945 { 00:24:45.945 "name": "BaseBdev4", 00:24:45.945 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:45.945 "is_configured": true, 00:24:45.945 "data_offset": 0, 00:24:45.945 "data_size": 65536 00:24:45.945 } 00:24:45.945 ] 00:24:45.945 }' 00:24:45.945 23:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.945 23:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.512 23:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:46.770 [2024-07-13 23:10:36.159209] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.029 23:10:36 
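The stretch above (@274-@309) first tears the online array down member by member (raid1 tolerates the losses until the last base bdev goes, at which point the raid moves online -> offline and is destructed), then rebuilds: three malloc disks are recreated, and bdev_raid_create references a fourth that does not exist yet, so Existed_Raid comes up in the "configuring" state and a further remove only drops num_base_bdevs_discovered. As a hedged standalone sketch (rpc shorthand as above; 32 MiB / 512-byte blocks as in the run):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for name in BaseBdev2 BaseBdev3 BaseBdev4; do
        rpc bdev_malloc_create 32 512 -b "$name"
    done
    rpc bdev_wait_for_examine
    # BaseBdev1 is referenced but absent, so the array cannot assemble yet
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # configuring
    rpc bdev_raid_remove_base_bdev BaseBdev2   # slot emptied, state stays configuring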
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:47.029 "name": "Existed_Raid", 00:24:47.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.029 "strip_size_kb": 0, 00:24:47.029 "state": "configuring", 00:24:47.029 "raid_level": "raid1", 00:24:47.029 "superblock": false, 00:24:47.029 "num_base_bdevs": 4, 00:24:47.029 "num_base_bdevs_discovered": 2, 00:24:47.029 "num_base_bdevs_operational": 4, 00:24:47.029 "base_bdevs_list": [ 00:24:47.029 { 00:24:47.029 "name": "BaseBdev1", 00:24:47.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.029 "is_configured": false, 00:24:47.029 "data_offset": 0, 00:24:47.029 "data_size": 0 00:24:47.029 }, 00:24:47.029 { 00:24:47.029 "name": null, 00:24:47.029 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:47.029 "is_configured": false, 00:24:47.029 "data_offset": 0, 00:24:47.029 "data_size": 65536 00:24:47.029 }, 00:24:47.029 { 00:24:47.029 "name": "BaseBdev3", 00:24:47.029 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:47.029 "is_configured": true, 00:24:47.029 "data_offset": 0, 00:24:47.029 "data_size": 65536 00:24:47.029 }, 00:24:47.029 { 00:24:47.029 "name": "BaseBdev4", 00:24:47.029 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:47.029 "is_configured": true, 00:24:47.029 "data_offset": 0, 00:24:47.029 "data_size": 65536 00:24:47.029 } 00:24:47.029 ] 00:24:47.029 }' 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:47.029 23:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.963 23:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.963 23:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:47.963 23:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:47.963 23:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:48.222 [2024-07-13 23:10:37.511729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:48.222 BaseBdev1 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:48.222 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:48.480 23:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:48.739 [ 00:24:48.739 { 00:24:48.739 "name": "BaseBdev1", 00:24:48.739 "aliases": [ 00:24:48.739 
"5d1aa705-c86b-4be7-aabd-05934e5046f5" 00:24:48.739 ], 00:24:48.739 "product_name": "Malloc disk", 00:24:48.739 "block_size": 512, 00:24:48.739 "num_blocks": 65536, 00:24:48.739 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:48.739 "assigned_rate_limits": { 00:24:48.739 "rw_ios_per_sec": 0, 00:24:48.739 "rw_mbytes_per_sec": 0, 00:24:48.739 "r_mbytes_per_sec": 0, 00:24:48.739 "w_mbytes_per_sec": 0 00:24:48.739 }, 00:24:48.739 "claimed": true, 00:24:48.739 "claim_type": "exclusive_write", 00:24:48.739 "zoned": false, 00:24:48.739 "supported_io_types": { 00:24:48.739 "read": true, 00:24:48.739 "write": true, 00:24:48.739 "unmap": true, 00:24:48.739 "flush": true, 00:24:48.739 "reset": true, 00:24:48.739 "nvme_admin": false, 00:24:48.739 "nvme_io": false, 00:24:48.739 "nvme_io_md": false, 00:24:48.739 "write_zeroes": true, 00:24:48.739 "zcopy": true, 00:24:48.739 "get_zone_info": false, 00:24:48.739 "zone_management": false, 00:24:48.739 "zone_append": false, 00:24:48.739 "compare": false, 00:24:48.739 "compare_and_write": false, 00:24:48.739 "abort": true, 00:24:48.739 "seek_hole": false, 00:24:48.739 "seek_data": false, 00:24:48.739 "copy": true, 00:24:48.739 "nvme_iov_md": false 00:24:48.739 }, 00:24:48.739 "memory_domains": [ 00:24:48.739 { 00:24:48.739 "dma_device_id": "system", 00:24:48.739 "dma_device_type": 1 00:24:48.739 }, 00:24:48.739 { 00:24:48.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.739 "dma_device_type": 2 00:24:48.739 } 00:24:48.739 ], 00:24:48.739 "driver_specific": {} 00:24:48.739 } 00:24:48.739 ] 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.739 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.997 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.997 "name": "Existed_Raid", 00:24:48.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.997 "strip_size_kb": 0, 00:24:48.997 "state": "configuring", 00:24:48.998 "raid_level": "raid1", 00:24:48.998 "superblock": false, 00:24:48.998 "num_base_bdevs": 4, 00:24:48.998 "num_base_bdevs_discovered": 3, 
00:24:48.998 "num_base_bdevs_operational": 4, 00:24:48.998 "base_bdevs_list": [ 00:24:48.998 { 00:24:48.998 "name": "BaseBdev1", 00:24:48.998 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:48.998 "is_configured": true, 00:24:48.998 "data_offset": 0, 00:24:48.998 "data_size": 65536 00:24:48.998 }, 00:24:48.998 { 00:24:48.998 "name": null, 00:24:48.998 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:48.998 "is_configured": false, 00:24:48.998 "data_offset": 0, 00:24:48.998 "data_size": 65536 00:24:48.998 }, 00:24:48.998 { 00:24:48.998 "name": "BaseBdev3", 00:24:48.998 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:48.998 "is_configured": true, 00:24:48.998 "data_offset": 0, 00:24:48.998 "data_size": 65536 00:24:48.998 }, 00:24:48.998 { 00:24:48.998 "name": "BaseBdev4", 00:24:48.998 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:48.998 "is_configured": true, 00:24:48.998 "data_offset": 0, 00:24:48.998 "data_size": 65536 00:24:48.998 } 00:24:48.998 ] 00:24:48.998 }' 00:24:48.998 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.998 23:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.565 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.565 23:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:49.824 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:49.824 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:50.083 [2024-07-13 23:10:39.268268] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.083 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.342 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.342 "name": "Existed_Raid", 
00:24:50.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.343 "strip_size_kb": 0, 00:24:50.343 "state": "configuring", 00:24:50.343 "raid_level": "raid1", 00:24:50.343 "superblock": false, 00:24:50.343 "num_base_bdevs": 4, 00:24:50.343 "num_base_bdevs_discovered": 2, 00:24:50.343 "num_base_bdevs_operational": 4, 00:24:50.343 "base_bdevs_list": [ 00:24:50.343 { 00:24:50.343 "name": "BaseBdev1", 00:24:50.343 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:50.343 "is_configured": true, 00:24:50.343 "data_offset": 0, 00:24:50.343 "data_size": 65536 00:24:50.343 }, 00:24:50.343 { 00:24:50.343 "name": null, 00:24:50.343 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:50.343 "is_configured": false, 00:24:50.343 "data_offset": 0, 00:24:50.343 "data_size": 65536 00:24:50.343 }, 00:24:50.343 { 00:24:50.343 "name": null, 00:24:50.343 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:50.343 "is_configured": false, 00:24:50.343 "data_offset": 0, 00:24:50.343 "data_size": 65536 00:24:50.343 }, 00:24:50.343 { 00:24:50.343 "name": "BaseBdev4", 00:24:50.343 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:50.343 "is_configured": true, 00:24:50.343 "data_offset": 0, 00:24:50.343 "data_size": 65536 00:24:50.343 } 00:24:50.343 ] 00:24:50.343 }' 00:24:50.343 23:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.343 23:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.909 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.909 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:51.168 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:51.168 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:51.427 [2024-07-13 23:10:40.664660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:51.427 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.686 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.686 "name": "Existed_Raid", 00:24:51.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.686 "strip_size_kb": 0, 00:24:51.686 "state": "configuring", 00:24:51.686 "raid_level": "raid1", 00:24:51.686 "superblock": false, 00:24:51.686 "num_base_bdevs": 4, 00:24:51.686 "num_base_bdevs_discovered": 3, 00:24:51.686 "num_base_bdevs_operational": 4, 00:24:51.686 "base_bdevs_list": [ 00:24:51.686 { 00:24:51.686 "name": "BaseBdev1", 00:24:51.686 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:51.686 "is_configured": true, 00:24:51.686 "data_offset": 0, 00:24:51.686 "data_size": 65536 00:24:51.686 }, 00:24:51.686 { 00:24:51.686 "name": null, 00:24:51.686 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:51.686 "is_configured": false, 00:24:51.686 "data_offset": 0, 00:24:51.686 "data_size": 65536 00:24:51.686 }, 00:24:51.686 { 00:24:51.686 "name": "BaseBdev3", 00:24:51.686 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:51.686 "is_configured": true, 00:24:51.686 "data_offset": 0, 00:24:51.686 "data_size": 65536 00:24:51.686 }, 00:24:51.686 { 00:24:51.686 "name": "BaseBdev4", 00:24:51.686 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:51.686 "is_configured": true, 00:24:51.686 "data_offset": 0, 00:24:51.686 "data_size": 65536 00:24:51.686 } 00:24:51.686 ] 00:24:51.686 }' 00:24:51.686 23:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.686 23:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.269 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.269 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:52.527 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:52.527 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:52.785 [2024-07-13 23:10:41.957030] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.785 23:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.043 23:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.043 "name": "Existed_Raid", 00:24:53.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.043 "strip_size_kb": 0, 00:24:53.043 "state": "configuring", 00:24:53.043 "raid_level": "raid1", 00:24:53.043 "superblock": false, 00:24:53.043 "num_base_bdevs": 4, 00:24:53.043 "num_base_bdevs_discovered": 2, 00:24:53.043 "num_base_bdevs_operational": 4, 00:24:53.043 "base_bdevs_list": [ 00:24:53.044 { 00:24:53.044 "name": null, 00:24:53.044 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:53.044 "is_configured": false, 00:24:53.044 "data_offset": 0, 00:24:53.044 "data_size": 65536 00:24:53.044 }, 00:24:53.044 { 00:24:53.044 "name": null, 00:24:53.044 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:53.044 "is_configured": false, 00:24:53.044 "data_offset": 0, 00:24:53.044 "data_size": 65536 00:24:53.044 }, 00:24:53.044 { 00:24:53.044 "name": "BaseBdev3", 00:24:53.044 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:53.044 "is_configured": true, 00:24:53.044 "data_offset": 0, 00:24:53.044 "data_size": 65536 00:24:53.044 }, 00:24:53.044 { 00:24:53.044 "name": "BaseBdev4", 00:24:53.044 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:53.044 "is_configured": true, 00:24:53.044 "data_offset": 0, 00:24:53.044 "data_size": 65536 00:24:53.044 } 00:24:53.044 ] 00:24:53.044 }' 00:24:53.044 23:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.044 23:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.610 23:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.610 23:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:53.869 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:53.869 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:54.127 [2024-07-13 23:10:43.398302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:54.127 
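While the array is still configuring, members can be detached and re-attached freely: bdev_raid_remove_base_bdev empties a slot (its name reverts to null, is_configured to false) and bdev_raid_add_base_bdev claims an existing bdev back into it, which is what @317-@331 exercise above for BaseBdev3 and BaseBdev2. The round trip, as a sketch (rpc shorthand as before):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_raid_remove_base_bdev BaseBdev3        # detach; the malloc bdev itself survives
    rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # false
    rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3                         # re-claim the slot
    rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # true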
23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.127 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.386 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.386 "name": "Existed_Raid", 00:24:54.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.386 "strip_size_kb": 0, 00:24:54.386 "state": "configuring", 00:24:54.386 "raid_level": "raid1", 00:24:54.386 "superblock": false, 00:24:54.386 "num_base_bdevs": 4, 00:24:54.386 "num_base_bdevs_discovered": 3, 00:24:54.386 "num_base_bdevs_operational": 4, 00:24:54.386 "base_bdevs_list": [ 00:24:54.386 { 00:24:54.386 "name": null, 00:24:54.386 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:54.386 "is_configured": false, 00:24:54.386 "data_offset": 0, 00:24:54.386 "data_size": 65536 00:24:54.386 }, 00:24:54.386 { 00:24:54.386 "name": "BaseBdev2", 00:24:54.386 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:54.386 "is_configured": true, 00:24:54.386 "data_offset": 0, 00:24:54.386 "data_size": 65536 00:24:54.386 }, 00:24:54.386 { 00:24:54.386 "name": "BaseBdev3", 00:24:54.386 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:54.386 "is_configured": true, 00:24:54.386 "data_offset": 0, 00:24:54.386 "data_size": 65536 00:24:54.386 }, 00:24:54.386 { 00:24:54.386 "name": "BaseBdev4", 00:24:54.386 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:54.386 "is_configured": true, 00:24:54.386 "data_offset": 0, 00:24:54.386 "data_size": 65536 00:24:54.386 } 00:24:54.386 ] 00:24:54.386 }' 00:24:54.386 23:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.386 23:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.954 23:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.954 23:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:55.213 23:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:55.213 23:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.213 23:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:55.472 23:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5d1aa705-c86b-4be7-aabd-05934e5046f5 00:24:55.731 [2024-07-13 23:10:45.095978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:55.731 [2024-07-13 23:10:45.096371] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x616000008180 00:24:55.731 [2024-07-13 23:10:45.096430] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:55.731 [2024-07-13 23:10:45.096704] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:55.731 [2024-07-13 23:10:45.097351] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:24:55.731 [2024-07-13 23:10:45.097527] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:24:55.731 [2024-07-13 23:10:45.097913] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.731 NewBaseBdev 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:55.731 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:55.989 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:56.248 [ 00:24:56.249 { 00:24:56.249 "name": "NewBaseBdev", 00:24:56.249 "aliases": [ 00:24:56.249 "5d1aa705-c86b-4be7-aabd-05934e5046f5" 00:24:56.249 ], 00:24:56.249 "product_name": "Malloc disk", 00:24:56.249 "block_size": 512, 00:24:56.249 "num_blocks": 65536, 00:24:56.249 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:56.249 "assigned_rate_limits": { 00:24:56.249 "rw_ios_per_sec": 0, 00:24:56.249 "rw_mbytes_per_sec": 0, 00:24:56.249 "r_mbytes_per_sec": 0, 00:24:56.249 "w_mbytes_per_sec": 0 00:24:56.249 }, 00:24:56.249 "claimed": true, 00:24:56.249 "claim_type": "exclusive_write", 00:24:56.249 "zoned": false, 00:24:56.249 "supported_io_types": { 00:24:56.249 "read": true, 00:24:56.249 "write": true, 00:24:56.249 "unmap": true, 00:24:56.249 "flush": true, 00:24:56.249 "reset": true, 00:24:56.249 "nvme_admin": false, 00:24:56.249 "nvme_io": false, 00:24:56.249 "nvme_io_md": false, 00:24:56.249 "write_zeroes": true, 00:24:56.249 "zcopy": true, 00:24:56.249 "get_zone_info": false, 00:24:56.249 "zone_management": false, 00:24:56.249 "zone_append": false, 00:24:56.249 "compare": false, 00:24:56.249 "compare_and_write": false, 00:24:56.249 "abort": true, 00:24:56.249 "seek_hole": false, 00:24:56.249 "seek_data": false, 00:24:56.249 "copy": true, 00:24:56.249 "nvme_iov_md": false 00:24:56.249 }, 00:24:56.249 "memory_domains": [ 00:24:56.249 { 00:24:56.249 "dma_device_id": "system", 00:24:56.249 "dma_device_type": 1 00:24:56.249 }, 00:24:56.249 { 00:24:56.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.249 "dma_device_type": 2 00:24:56.249 } 00:24:56.249 ], 00:24:56.249 "driver_specific": {} 00:24:56.249 } 00:24:56.249 ] 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:56.249 
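The step at @333-@334 above is what finally brings the array online: an emptied slot keeps the UUID of the member it lost, so minting a fresh malloc bdev with exactly that UUID (-u) lets the raid claim it as the missing member and transition configuring -> online. Sketched (rpc shorthand as before; NewBaseBdev is the name this test uses):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    uuid=$(rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"   # created with the remembered uuid
    rpc bdev_raid_get_bdevs all | jq -r '.[0].state'          # online, all four slots configured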
23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.249 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.507 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.507 "name": "Existed_Raid", 00:24:56.507 "uuid": "53f5be1e-1923-4ae1-a6dc-0046973b69b9", 00:24:56.507 "strip_size_kb": 0, 00:24:56.507 "state": "online", 00:24:56.507 "raid_level": "raid1", 00:24:56.507 "superblock": false, 00:24:56.507 "num_base_bdevs": 4, 00:24:56.507 "num_base_bdevs_discovered": 4, 00:24:56.507 "num_base_bdevs_operational": 4, 00:24:56.507 "base_bdevs_list": [ 00:24:56.507 { 00:24:56.507 "name": "NewBaseBdev", 00:24:56.507 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:56.507 "is_configured": true, 00:24:56.507 "data_offset": 0, 00:24:56.507 "data_size": 65536 00:24:56.507 }, 00:24:56.507 { 00:24:56.507 "name": "BaseBdev2", 00:24:56.507 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:56.507 "is_configured": true, 00:24:56.507 "data_offset": 0, 00:24:56.507 "data_size": 65536 00:24:56.507 }, 00:24:56.507 { 00:24:56.507 "name": "BaseBdev3", 00:24:56.507 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:56.507 "is_configured": true, 00:24:56.507 "data_offset": 0, 00:24:56.507 "data_size": 65536 00:24:56.507 }, 00:24:56.507 { 00:24:56.507 "name": "BaseBdev4", 00:24:56.507 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:56.507 "is_configured": true, 00:24:56.507 "data_offset": 0, 00:24:56.507 "data_size": 65536 00:24:56.507 } 00:24:56.507 ] 00:24:56.507 }' 00:24:56.507 23:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.507 23:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
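verify_raid_bdev_state (@116-@128), called after every mutation in this test, inspects nothing beyond the single record bdev_raid_get_bdevs returns for the named array, as far as this log shows; the online check at @335 reduces to roughly (hedged sketch, rpc shorthand as before):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    tmp=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state      <<< "$tmp") == online ]]
    [[ $(jq -r .raid_level <<< "$tmp") == raid1  ]]
    [[ $(jq .num_base_bdevs_discovered  <<< "$tmp") == 4 ]]
    [[ $(jq .num_base_bdevs_operational <<< "$tmp") == 4 ]]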
00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:57.071 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:57.329 [2024-07-13 23:10:46.693138] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:57.329 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:57.329 "name": "Existed_Raid", 00:24:57.329 "aliases": [ 00:24:57.329 "53f5be1e-1923-4ae1-a6dc-0046973b69b9" 00:24:57.329 ], 00:24:57.329 "product_name": "Raid Volume", 00:24:57.329 "block_size": 512, 00:24:57.329 "num_blocks": 65536, 00:24:57.329 "uuid": "53f5be1e-1923-4ae1-a6dc-0046973b69b9", 00:24:57.329 "assigned_rate_limits": { 00:24:57.329 "rw_ios_per_sec": 0, 00:24:57.329 "rw_mbytes_per_sec": 0, 00:24:57.329 "r_mbytes_per_sec": 0, 00:24:57.329 "w_mbytes_per_sec": 0 00:24:57.329 }, 00:24:57.329 "claimed": false, 00:24:57.329 "zoned": false, 00:24:57.329 "supported_io_types": { 00:24:57.329 "read": true, 00:24:57.329 "write": true, 00:24:57.329 "unmap": false, 00:24:57.329 "flush": false, 00:24:57.329 "reset": true, 00:24:57.329 "nvme_admin": false, 00:24:57.329 "nvme_io": false, 00:24:57.329 "nvme_io_md": false, 00:24:57.329 "write_zeroes": true, 00:24:57.329 "zcopy": false, 00:24:57.329 "get_zone_info": false, 00:24:57.329 "zone_management": false, 00:24:57.329 "zone_append": false, 00:24:57.329 "compare": false, 00:24:57.329 "compare_and_write": false, 00:24:57.329 "abort": false, 00:24:57.329 "seek_hole": false, 00:24:57.329 "seek_data": false, 00:24:57.329 "copy": false, 00:24:57.329 "nvme_iov_md": false 00:24:57.329 }, 00:24:57.329 "memory_domains": [ 00:24:57.329 { 00:24:57.329 "dma_device_id": "system", 00:24:57.329 "dma_device_type": 1 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.329 "dma_device_type": 2 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "system", 00:24:57.329 "dma_device_type": 1 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.329 "dma_device_type": 2 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "system", 00:24:57.329 "dma_device_type": 1 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.329 "dma_device_type": 2 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "system", 00:24:57.329 "dma_device_type": 1 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.329 "dma_device_type": 2 00:24:57.329 } 00:24:57.329 ], 00:24:57.329 "driver_specific": { 00:24:57.329 "raid": { 00:24:57.329 "uuid": "53f5be1e-1923-4ae1-a6dc-0046973b69b9", 00:24:57.329 "strip_size_kb": 0, 00:24:57.329 "state": "online", 00:24:57.329 "raid_level": "raid1", 00:24:57.329 "superblock": false, 00:24:57.329 "num_base_bdevs": 4, 00:24:57.329 "num_base_bdevs_discovered": 4, 00:24:57.329 "num_base_bdevs_operational": 4, 00:24:57.329 "base_bdevs_list": [ 00:24:57.329 { 00:24:57.329 "name": "NewBaseBdev", 00:24:57.329 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:57.329 "is_configured": true, 00:24:57.329 "data_offset": 0, 00:24:57.329 "data_size": 65536 
00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "name": "BaseBdev2", 00:24:57.329 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:57.329 "is_configured": true, 00:24:57.329 "data_offset": 0, 00:24:57.329 "data_size": 65536 00:24:57.329 }, 00:24:57.329 { 00:24:57.329 "name": "BaseBdev3", 00:24:57.329 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:57.329 "is_configured": true, 00:24:57.330 "data_offset": 0, 00:24:57.330 "data_size": 65536 00:24:57.330 }, 00:24:57.330 { 00:24:57.330 "name": "BaseBdev4", 00:24:57.330 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:57.330 "is_configured": true, 00:24:57.330 "data_offset": 0, 00:24:57.330 "data_size": 65536 00:24:57.330 } 00:24:57.330 ] 00:24:57.330 } 00:24:57.330 } 00:24:57.330 }' 00:24:57.330 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:57.587 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:57.587 BaseBdev2 00:24:57.587 BaseBdev3 00:24:57.587 BaseBdev4' 00:24:57.587 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.587 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:57.587 23:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:57.845 "name": "NewBaseBdev", 00:24:57.845 "aliases": [ 00:24:57.845 "5d1aa705-c86b-4be7-aabd-05934e5046f5" 00:24:57.845 ], 00:24:57.845 "product_name": "Malloc disk", 00:24:57.845 "block_size": 512, 00:24:57.845 "num_blocks": 65536, 00:24:57.845 "uuid": "5d1aa705-c86b-4be7-aabd-05934e5046f5", 00:24:57.845 "assigned_rate_limits": { 00:24:57.845 "rw_ios_per_sec": 0, 00:24:57.845 "rw_mbytes_per_sec": 0, 00:24:57.845 "r_mbytes_per_sec": 0, 00:24:57.845 "w_mbytes_per_sec": 0 00:24:57.845 }, 00:24:57.845 "claimed": true, 00:24:57.845 "claim_type": "exclusive_write", 00:24:57.845 "zoned": false, 00:24:57.845 "supported_io_types": { 00:24:57.845 "read": true, 00:24:57.845 "write": true, 00:24:57.845 "unmap": true, 00:24:57.845 "flush": true, 00:24:57.845 "reset": true, 00:24:57.845 "nvme_admin": false, 00:24:57.845 "nvme_io": false, 00:24:57.845 "nvme_io_md": false, 00:24:57.845 "write_zeroes": true, 00:24:57.845 "zcopy": true, 00:24:57.845 "get_zone_info": false, 00:24:57.845 "zone_management": false, 00:24:57.845 "zone_append": false, 00:24:57.845 "compare": false, 00:24:57.845 "compare_and_write": false, 00:24:57.845 "abort": true, 00:24:57.845 "seek_hole": false, 00:24:57.845 "seek_data": false, 00:24:57.845 "copy": true, 00:24:57.845 "nvme_iov_md": false 00:24:57.845 }, 00:24:57.845 "memory_domains": [ 00:24:57.845 { 00:24:57.845 "dma_device_id": "system", 00:24:57.845 "dma_device_type": 1 00:24:57.845 }, 00:24:57.845 { 00:24:57.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.845 "dma_device_type": 2 00:24:57.845 } 00:24:57.845 ], 00:24:57.845 "driver_specific": {} 00:24:57.845 }' 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:57.845 
23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:57.845 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:58.103 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:58.361 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:58.361 "name": "BaseBdev2", 00:24:58.361 "aliases": [ 00:24:58.361 "0157e6b5-ec0a-4793-89ea-c649d8c7af7f" 00:24:58.361 ], 00:24:58.361 "product_name": "Malloc disk", 00:24:58.361 "block_size": 512, 00:24:58.361 "num_blocks": 65536, 00:24:58.361 "uuid": "0157e6b5-ec0a-4793-89ea-c649d8c7af7f", 00:24:58.361 "assigned_rate_limits": { 00:24:58.361 "rw_ios_per_sec": 0, 00:24:58.361 "rw_mbytes_per_sec": 0, 00:24:58.361 "r_mbytes_per_sec": 0, 00:24:58.361 "w_mbytes_per_sec": 0 00:24:58.361 }, 00:24:58.361 "claimed": true, 00:24:58.361 "claim_type": "exclusive_write", 00:24:58.361 "zoned": false, 00:24:58.361 "supported_io_types": { 00:24:58.361 "read": true, 00:24:58.361 "write": true, 00:24:58.361 "unmap": true, 00:24:58.361 "flush": true, 00:24:58.361 "reset": true, 00:24:58.361 "nvme_admin": false, 00:24:58.361 "nvme_io": false, 00:24:58.361 "nvme_io_md": false, 00:24:58.361 "write_zeroes": true, 00:24:58.361 "zcopy": true, 00:24:58.361 "get_zone_info": false, 00:24:58.361 "zone_management": false, 00:24:58.361 "zone_append": false, 00:24:58.361 "compare": false, 00:24:58.361 "compare_and_write": false, 00:24:58.361 "abort": true, 00:24:58.361 "seek_hole": false, 00:24:58.361 "seek_data": false, 00:24:58.361 "copy": true, 00:24:58.361 "nvme_iov_md": false 00:24:58.361 }, 00:24:58.361 "memory_domains": [ 00:24:58.361 { 00:24:58.361 "dma_device_id": "system", 00:24:58.361 "dma_device_type": 1 00:24:58.361 }, 00:24:58.361 { 00:24:58.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.361 "dma_device_type": 2 00:24:58.361 } 00:24:58.362 ], 00:24:58.362 "driver_specific": {} 00:24:58.362 }' 00:24:58.362 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.362 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.644 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:58.644 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.644 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.644 
23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.644 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.644 23:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.644 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.644 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.903 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.903 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.903 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:58.903 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:58.903 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:59.162 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:59.162 "name": "BaseBdev3", 00:24:59.162 "aliases": [ 00:24:59.162 "ac303cc4-9348-4c8f-9f30-7a325c6887ed" 00:24:59.162 ], 00:24:59.162 "product_name": "Malloc disk", 00:24:59.162 "block_size": 512, 00:24:59.162 "num_blocks": 65536, 00:24:59.162 "uuid": "ac303cc4-9348-4c8f-9f30-7a325c6887ed", 00:24:59.162 "assigned_rate_limits": { 00:24:59.162 "rw_ios_per_sec": 0, 00:24:59.162 "rw_mbytes_per_sec": 0, 00:24:59.162 "r_mbytes_per_sec": 0, 00:24:59.162 "w_mbytes_per_sec": 0 00:24:59.162 }, 00:24:59.162 "claimed": true, 00:24:59.162 "claim_type": "exclusive_write", 00:24:59.162 "zoned": false, 00:24:59.162 "supported_io_types": { 00:24:59.162 "read": true, 00:24:59.162 "write": true, 00:24:59.162 "unmap": true, 00:24:59.162 "flush": true, 00:24:59.162 "reset": true, 00:24:59.162 "nvme_admin": false, 00:24:59.162 "nvme_io": false, 00:24:59.162 "nvme_io_md": false, 00:24:59.162 "write_zeroes": true, 00:24:59.162 "zcopy": true, 00:24:59.162 "get_zone_info": false, 00:24:59.162 "zone_management": false, 00:24:59.162 "zone_append": false, 00:24:59.162 "compare": false, 00:24:59.162 "compare_and_write": false, 00:24:59.162 "abort": true, 00:24:59.162 "seek_hole": false, 00:24:59.162 "seek_data": false, 00:24:59.162 "copy": true, 00:24:59.162 "nvme_iov_md": false 00:24:59.162 }, 00:24:59.162 "memory_domains": [ 00:24:59.162 { 00:24:59.162 "dma_device_id": "system", 00:24:59.162 "dma_device_type": 1 00:24:59.162 }, 00:24:59.162 { 00:24:59.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.162 "dma_device_type": 2 00:24:59.162 } 00:24:59.162 ], 00:24:59.162 "driver_specific": {} 00:24:59.162 }' 00:24:59.162 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.163 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.163 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:59.163 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.163 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:59.421 23:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:59.680 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:59.680 "name": "BaseBdev4", 00:24:59.680 "aliases": [ 00:24:59.680 "217629fa-3b61-4c1c-acbe-5f1cee02cdbb" 00:24:59.680 ], 00:24:59.680 "product_name": "Malloc disk", 00:24:59.680 "block_size": 512, 00:24:59.680 "num_blocks": 65536, 00:24:59.680 "uuid": "217629fa-3b61-4c1c-acbe-5f1cee02cdbb", 00:24:59.680 "assigned_rate_limits": { 00:24:59.681 "rw_ios_per_sec": 0, 00:24:59.681 "rw_mbytes_per_sec": 0, 00:24:59.681 "r_mbytes_per_sec": 0, 00:24:59.681 "w_mbytes_per_sec": 0 00:24:59.681 }, 00:24:59.681 "claimed": true, 00:24:59.681 "claim_type": "exclusive_write", 00:24:59.681 "zoned": false, 00:24:59.681 "supported_io_types": { 00:24:59.681 "read": true, 00:24:59.681 "write": true, 00:24:59.681 "unmap": true, 00:24:59.681 "flush": true, 00:24:59.681 "reset": true, 00:24:59.681 "nvme_admin": false, 00:24:59.681 "nvme_io": false, 00:24:59.681 "nvme_io_md": false, 00:24:59.681 "write_zeroes": true, 00:24:59.681 "zcopy": true, 00:24:59.681 "get_zone_info": false, 00:24:59.681 "zone_management": false, 00:24:59.681 "zone_append": false, 00:24:59.681 "compare": false, 00:24:59.681 "compare_and_write": false, 00:24:59.681 "abort": true, 00:24:59.681 "seek_hole": false, 00:24:59.681 "seek_data": false, 00:24:59.681 "copy": true, 00:24:59.681 "nvme_iov_md": false 00:24:59.681 }, 00:24:59.681 "memory_domains": [ 00:24:59.681 { 00:24:59.681 "dma_device_id": "system", 00:24:59.681 "dma_device_type": 1 00:24:59.681 }, 00:24:59.681 { 00:24:59.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.681 "dma_device_type": 2 00:24:59.681 } 00:24:59.681 ], 00:24:59.681 "driver_specific": {} 00:24:59.681 }' 00:24:59.681 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:24:59.939 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.205 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.205 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:00.205 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:00.499 [2024-07-13 23:10:49.665420] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:00.499 [2024-07-13 23:10:49.665645] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.499 [2024-07-13 23:10:49.665900] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.499 [2024-07-13 23:10:49.666335] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.499 [2024-07-13 23:10:49.666486] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 150356 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 150356 ']' 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 150356 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150356 00:25:00.499 killing process with pid 150356 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150356' 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 150356 00:25:00.499 [2024-07-13 23:10:49.702004] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.499 23:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 150356 00:25:00.499 [2024-07-13 23:10:49.749933] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.763 ************************************ 00:25:00.763 END TEST raid_state_function_test 00:25:00.763 ************************************ 00:25:00.763 23:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:00.763 00:25:00.763 real 0m33.813s 00:25:00.763 user 1m4.321s 00:25:00.763 sys 0m3.988s 00:25:00.763 23:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.763 23:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.763 23:10:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:00.764 23:10:50 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:00.764 23:10:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
00:25:00.764 23:10:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.764 23:10:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.764 ************************************ 00:25:00.764 START TEST raid_state_function_test_sb 00:25:00.764 ************************************ 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # 
superblock_create_arg=-s 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=151451 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 151451' 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:00.764 Process raid pid: 151451 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 151451 /var/tmp/spdk-raid.sock 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 151451 ']' 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:00.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.764 23:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.022 [2024-07-13 23:10:50.188311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:25:01.022 [2024-07-13 23:10:50.188712] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.022 [2024-07-13 23:10:50.335782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.281 [2024-07-13 23:10:50.429978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.281 [2024-07-13 23:10:50.504210] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.849 23:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.849 23:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:01.849 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:02.107 [2024-07-13 23:10:51.351717] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:02.107 [2024-07-13 23:10:51.351973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:02.107 [2024-07-13 23:10:51.352093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:02.107 [2024-07-13 23:10:51.352165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:02.107 [2024-07-13 23:10:51.352283] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:02.107 [2024-07-13 23:10:51.352382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:02.107 [2024-07-13 23:10:51.352657] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:02.107 [2024-07-13 23:10:51.352737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:02.107 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:02.107 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:02.107 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:02.107 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:02.107 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.108 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.366 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.366 "name": "Existed_Raid", 00:25:02.366 "uuid": "72c050e8-8e5e-4912-934a-ccb92d5a97fc", 00:25:02.366 "strip_size_kb": 0, 00:25:02.366 "state": "configuring", 00:25:02.366 "raid_level": "raid1", 00:25:02.366 "superblock": true, 00:25:02.366 "num_base_bdevs": 4, 00:25:02.366 "num_base_bdevs_discovered": 0, 00:25:02.366 "num_base_bdevs_operational": 4, 00:25:02.366 "base_bdevs_list": [ 00:25:02.366 { 00:25:02.366 "name": "BaseBdev1", 00:25:02.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.366 "is_configured": false, 00:25:02.366 "data_offset": 0, 00:25:02.366 "data_size": 0 00:25:02.366 }, 00:25:02.366 { 00:25:02.366 "name": "BaseBdev2", 00:25:02.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.366 "is_configured": false, 00:25:02.366 "data_offset": 0, 00:25:02.366 "data_size": 0 00:25:02.366 }, 00:25:02.366 { 00:25:02.366 "name": "BaseBdev3", 00:25:02.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.366 "is_configured": false, 00:25:02.366 "data_offset": 0, 00:25:02.366 "data_size": 0 00:25:02.366 }, 00:25:02.366 { 00:25:02.366 "name": "BaseBdev4", 00:25:02.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.366 "is_configured": false, 00:25:02.366 "data_offset": 0, 00:25:02.366 "data_size": 0 00:25:02.366 } 00:25:02.366 ] 00:25:02.366 }' 00:25:02.366 23:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.366 23:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.933 23:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:25:03.191 [2024-07-13 23:10:52.469351] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.191 [2024-07-13 23:10:52.469637] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:25:03.191 23:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:03.450 [2024-07-13 23:10:52.721394] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.450 [2024-07-13 23:10:52.721689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.450 [2024-07-13 23:10:52.721853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.450 [2024-07-13 23:10:52.721927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.450 [2024-07-13 23:10:52.722178] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.450 [2024-07-13 23:10:52.722247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.450 [2024-07-13 23:10:52.722457] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:03.450 [2024-07-13 23:10:52.722556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:03.450 23:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:03.708 [2024-07-13 23:10:52.944125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.708 BaseBdev1 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:03.708 23:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:03.968 23:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:04.225 [ 00:25:04.225 { 00:25:04.225 "name": "BaseBdev1", 00:25:04.225 "aliases": [ 00:25:04.225 "26bcec33-7639-429d-8997-f972351c7d3b" 00:25:04.225 ], 00:25:04.225 "product_name": "Malloc disk", 00:25:04.225 "block_size": 512, 00:25:04.225 "num_blocks": 65536, 00:25:04.225 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:04.225 "assigned_rate_limits": { 00:25:04.225 "rw_ios_per_sec": 0, 00:25:04.225 "rw_mbytes_per_sec": 0, 00:25:04.225 "r_mbytes_per_sec": 0, 00:25:04.225 "w_mbytes_per_sec": 0 00:25:04.225 }, 00:25:04.225 "claimed": 
true, 00:25:04.225 "claim_type": "exclusive_write", 00:25:04.225 "zoned": false, 00:25:04.225 "supported_io_types": { 00:25:04.225 "read": true, 00:25:04.225 "write": true, 00:25:04.225 "unmap": true, 00:25:04.225 "flush": true, 00:25:04.225 "reset": true, 00:25:04.225 "nvme_admin": false, 00:25:04.225 "nvme_io": false, 00:25:04.225 "nvme_io_md": false, 00:25:04.225 "write_zeroes": true, 00:25:04.225 "zcopy": true, 00:25:04.225 "get_zone_info": false, 00:25:04.225 "zone_management": false, 00:25:04.225 "zone_append": false, 00:25:04.225 "compare": false, 00:25:04.225 "compare_and_write": false, 00:25:04.225 "abort": true, 00:25:04.225 "seek_hole": false, 00:25:04.225 "seek_data": false, 00:25:04.225 "copy": true, 00:25:04.225 "nvme_iov_md": false 00:25:04.225 }, 00:25:04.225 "memory_domains": [ 00:25:04.225 { 00:25:04.225 "dma_device_id": "system", 00:25:04.225 "dma_device_type": 1 00:25:04.225 }, 00:25:04.225 { 00:25:04.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.225 "dma_device_type": 2 00:25:04.225 } 00:25:04.225 ], 00:25:04.225 "driver_specific": {} 00:25:04.225 } 00:25:04.225 ] 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.225 "name": "Existed_Raid", 00:25:04.225 "uuid": "7ef3dce2-c72b-4f14-8a9d-7dedbf04650f", 00:25:04.225 "strip_size_kb": 0, 00:25:04.225 "state": "configuring", 00:25:04.225 "raid_level": "raid1", 00:25:04.225 "superblock": true, 00:25:04.225 "num_base_bdevs": 4, 00:25:04.225 "num_base_bdevs_discovered": 1, 00:25:04.225 "num_base_bdevs_operational": 4, 00:25:04.225 "base_bdevs_list": [ 00:25:04.225 { 00:25:04.225 "name": "BaseBdev1", 00:25:04.225 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:04.225 "is_configured": true, 00:25:04.225 "data_offset": 2048, 00:25:04.225 "data_size": 63488 00:25:04.225 }, 00:25:04.225 { 00:25:04.225 "name": "BaseBdev2", 00:25:04.225 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:04.225 "is_configured": false, 00:25:04.225 "data_offset": 0, 00:25:04.225 "data_size": 0 00:25:04.225 }, 00:25:04.225 { 00:25:04.225 "name": "BaseBdev3", 00:25:04.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.225 "is_configured": false, 00:25:04.225 "data_offset": 0, 00:25:04.225 "data_size": 0 00:25:04.225 }, 00:25:04.225 { 00:25:04.225 "name": "BaseBdev4", 00:25:04.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.225 "is_configured": false, 00:25:04.225 "data_offset": 0, 00:25:04.225 "data_size": 0 00:25:04.225 } 00:25:04.225 ] 00:25:04.225 }' 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.225 23:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.789 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:05.047 [2024-07-13 23:10:54.409221] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:05.047 [2024-07-13 23:10:54.409734] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:25:05.047 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:05.305 [2024-07-13 23:10:54.645330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.305 [2024-07-13 23:10:54.647973] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:05.305 [2024-07-13 23:10:54.648210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:05.305 [2024-07-13 23:10:54.648352] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:05.305 [2024-07-13 23:10:54.648429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:05.305 [2024-07-13 23:10:54.648639] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:05.305 [2024-07-13 23:10:54.648709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:05.305 23:10:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.305 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.564 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.564 "name": "Existed_Raid", 00:25:05.564 "uuid": "c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:05.564 "strip_size_kb": 0, 00:25:05.564 "state": "configuring", 00:25:05.564 "raid_level": "raid1", 00:25:05.564 "superblock": true, 00:25:05.564 "num_base_bdevs": 4, 00:25:05.564 "num_base_bdevs_discovered": 1, 00:25:05.564 "num_base_bdevs_operational": 4, 00:25:05.564 "base_bdevs_list": [ 00:25:05.564 { 00:25:05.564 "name": "BaseBdev1", 00:25:05.564 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:05.564 "is_configured": true, 00:25:05.564 "data_offset": 2048, 00:25:05.564 "data_size": 63488 00:25:05.564 }, 00:25:05.564 { 00:25:05.564 "name": "BaseBdev2", 00:25:05.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.564 "is_configured": false, 00:25:05.564 "data_offset": 0, 00:25:05.564 "data_size": 0 00:25:05.564 }, 00:25:05.564 { 00:25:05.564 "name": "BaseBdev3", 00:25:05.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.564 "is_configured": false, 00:25:05.564 "data_offset": 0, 00:25:05.564 "data_size": 0 00:25:05.564 }, 00:25:05.564 { 00:25:05.564 "name": "BaseBdev4", 00:25:05.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.564 "is_configured": false, 00:25:05.564 "data_offset": 0, 00:25:05.564 "data_size": 0 00:25:05.564 } 00:25:05.564 ] 00:25:05.564 }' 00:25:05.564 23:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.564 23:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.131 23:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:06.389 [2024-07-13 23:10:55.753098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:06.389 BaseBdev2 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:06.389 23:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.647 23:10:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:06.905 [ 00:25:06.905 { 00:25:06.905 "name": "BaseBdev2", 00:25:06.905 "aliases": [ 00:25:06.905 "b43acf16-919e-44c9-8ea8-f32d5380f8e2" 00:25:06.905 ], 00:25:06.905 "product_name": "Malloc disk", 00:25:06.905 "block_size": 512, 00:25:06.905 "num_blocks": 65536, 00:25:06.905 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:06.905 "assigned_rate_limits": { 00:25:06.905 "rw_ios_per_sec": 0, 00:25:06.905 "rw_mbytes_per_sec": 0, 00:25:06.905 "r_mbytes_per_sec": 0, 00:25:06.905 "w_mbytes_per_sec": 0 00:25:06.905 }, 00:25:06.905 "claimed": true, 00:25:06.905 "claim_type": "exclusive_write", 00:25:06.905 "zoned": false, 00:25:06.905 "supported_io_types": { 00:25:06.905 "read": true, 00:25:06.905 "write": true, 00:25:06.905 "unmap": true, 00:25:06.905 "flush": true, 00:25:06.905 "reset": true, 00:25:06.905 "nvme_admin": false, 00:25:06.905 "nvme_io": false, 00:25:06.905 "nvme_io_md": false, 00:25:06.905 "write_zeroes": true, 00:25:06.905 "zcopy": true, 00:25:06.905 "get_zone_info": false, 00:25:06.905 "zone_management": false, 00:25:06.905 "zone_append": false, 00:25:06.905 "compare": false, 00:25:06.905 "compare_and_write": false, 00:25:06.905 "abort": true, 00:25:06.905 "seek_hole": false, 00:25:06.905 "seek_data": false, 00:25:06.905 "copy": true, 00:25:06.905 "nvme_iov_md": false 00:25:06.905 }, 00:25:06.905 "memory_domains": [ 00:25:06.905 { 00:25:06.905 "dma_device_id": "system", 00:25:06.905 "dma_device_type": 1 00:25:06.905 }, 00:25:06.905 { 00:25:06.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.905 "dma_device_type": 2 00:25:06.905 } 00:25:06.905 ], 00:25:06.905 "driver_specific": {} 00:25:06.905 } 00:25:06.905 ] 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.905 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.905 23:10:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.163 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:07.163 "name": "Existed_Raid", 00:25:07.163 "uuid": "c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:07.163 "strip_size_kb": 0, 00:25:07.163 "state": "configuring", 00:25:07.163 "raid_level": "raid1", 00:25:07.163 "superblock": true, 00:25:07.163 "num_base_bdevs": 4, 00:25:07.163 "num_base_bdevs_discovered": 2, 00:25:07.163 "num_base_bdevs_operational": 4, 00:25:07.163 "base_bdevs_list": [ 00:25:07.163 { 00:25:07.163 "name": "BaseBdev1", 00:25:07.163 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:07.163 "is_configured": true, 00:25:07.163 "data_offset": 2048, 00:25:07.163 "data_size": 63488 00:25:07.163 }, 00:25:07.163 { 00:25:07.163 "name": "BaseBdev2", 00:25:07.163 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:07.163 "is_configured": true, 00:25:07.163 "data_offset": 2048, 00:25:07.163 "data_size": 63488 00:25:07.163 }, 00:25:07.163 { 00:25:07.163 "name": "BaseBdev3", 00:25:07.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.163 "is_configured": false, 00:25:07.163 "data_offset": 0, 00:25:07.163 "data_size": 0 00:25:07.163 }, 00:25:07.163 { 00:25:07.163 "name": "BaseBdev4", 00:25:07.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.163 "is_configured": false, 00:25:07.163 "data_offset": 0, 00:25:07.163 "data_size": 0 00:25:07.163 } 00:25:07.163 ] 00:25:07.163 }' 00:25:07.163 23:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:07.163 23:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:07.988 [2024-07-13 23:10:57.295141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.988 BaseBdev3 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:07.988 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:08.247 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:08.505 [ 00:25:08.505 { 00:25:08.505 "name": "BaseBdev3", 00:25:08.505 "aliases": [ 00:25:08.505 "db964dfb-a5a3-44af-b262-793a64ac73e8" 00:25:08.505 ], 00:25:08.505 "product_name": "Malloc disk", 00:25:08.505 "block_size": 512, 00:25:08.505 "num_blocks": 65536, 00:25:08.505 "uuid": "db964dfb-a5a3-44af-b262-793a64ac73e8", 00:25:08.505 "assigned_rate_limits": { 
00:25:08.505 "rw_ios_per_sec": 0, 00:25:08.505 "rw_mbytes_per_sec": 0, 00:25:08.505 "r_mbytes_per_sec": 0, 00:25:08.505 "w_mbytes_per_sec": 0 00:25:08.505 }, 00:25:08.505 "claimed": true, 00:25:08.505 "claim_type": "exclusive_write", 00:25:08.505 "zoned": false, 00:25:08.505 "supported_io_types": { 00:25:08.505 "read": true, 00:25:08.505 "write": true, 00:25:08.505 "unmap": true, 00:25:08.505 "flush": true, 00:25:08.505 "reset": true, 00:25:08.505 "nvme_admin": false, 00:25:08.505 "nvme_io": false, 00:25:08.505 "nvme_io_md": false, 00:25:08.505 "write_zeroes": true, 00:25:08.505 "zcopy": true, 00:25:08.505 "get_zone_info": false, 00:25:08.506 "zone_management": false, 00:25:08.506 "zone_append": false, 00:25:08.506 "compare": false, 00:25:08.506 "compare_and_write": false, 00:25:08.506 "abort": true, 00:25:08.506 "seek_hole": false, 00:25:08.506 "seek_data": false, 00:25:08.506 "copy": true, 00:25:08.506 "nvme_iov_md": false 00:25:08.506 }, 00:25:08.506 "memory_domains": [ 00:25:08.506 { 00:25:08.506 "dma_device_id": "system", 00:25:08.506 "dma_device_type": 1 00:25:08.506 }, 00:25:08.506 { 00:25:08.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.506 "dma_device_type": 2 00:25:08.506 } 00:25:08.506 ], 00:25:08.506 "driver_specific": {} 00:25:08.506 } 00:25:08.506 ] 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.506 23:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.764 23:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.764 "name": "Existed_Raid", 00:25:08.764 "uuid": "c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:08.764 "strip_size_kb": 0, 00:25:08.764 "state": "configuring", 00:25:08.764 "raid_level": "raid1", 00:25:08.764 "superblock": true, 00:25:08.764 "num_base_bdevs": 4, 00:25:08.764 "num_base_bdevs_discovered": 3, 
00:25:08.764 "num_base_bdevs_operational": 4, 00:25:08.764 "base_bdevs_list": [ 00:25:08.764 { 00:25:08.764 "name": "BaseBdev1", 00:25:08.764 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:08.764 "is_configured": true, 00:25:08.764 "data_offset": 2048, 00:25:08.764 "data_size": 63488 00:25:08.764 }, 00:25:08.764 { 00:25:08.764 "name": "BaseBdev2", 00:25:08.764 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:08.764 "is_configured": true, 00:25:08.764 "data_offset": 2048, 00:25:08.764 "data_size": 63488 00:25:08.764 }, 00:25:08.764 { 00:25:08.764 "name": "BaseBdev3", 00:25:08.764 "uuid": "db964dfb-a5a3-44af-b262-793a64ac73e8", 00:25:08.764 "is_configured": true, 00:25:08.764 "data_offset": 2048, 00:25:08.764 "data_size": 63488 00:25:08.764 }, 00:25:08.764 { 00:25:08.764 "name": "BaseBdev4", 00:25:08.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.764 "is_configured": false, 00:25:08.764 "data_offset": 0, 00:25:08.764 "data_size": 0 00:25:08.765 } 00:25:08.765 ] 00:25:08.765 }' 00:25:08.765 23:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.765 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.330 23:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:09.589 [2024-07-13 23:10:58.852677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:09.589 [2024-07-13 23:10:58.853294] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:25:09.589 [2024-07-13 23:10:58.853476] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:09.589 [2024-07-13 23:10:58.853671] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:25:09.589 BaseBdev4 00:25:09.589 [2024-07-13 23:10:58.854384] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:25:09.589 [2024-07-13 23:10:58.854534] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:25:09.589 [2024-07-13 23:10:58.854875] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:09.589 23:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.849 23:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:10.108 [ 00:25:10.108 { 00:25:10.108 "name": "BaseBdev4", 00:25:10.108 "aliases": [ 00:25:10.108 
"b5a586ef-023f-4db3-8efc-a08bd6d3d0f6" 00:25:10.108 ], 00:25:10.108 "product_name": "Malloc disk", 00:25:10.108 "block_size": 512, 00:25:10.108 "num_blocks": 65536, 00:25:10.108 "uuid": "b5a586ef-023f-4db3-8efc-a08bd6d3d0f6", 00:25:10.108 "assigned_rate_limits": { 00:25:10.108 "rw_ios_per_sec": 0, 00:25:10.108 "rw_mbytes_per_sec": 0, 00:25:10.108 "r_mbytes_per_sec": 0, 00:25:10.108 "w_mbytes_per_sec": 0 00:25:10.108 }, 00:25:10.108 "claimed": true, 00:25:10.108 "claim_type": "exclusive_write", 00:25:10.108 "zoned": false, 00:25:10.108 "supported_io_types": { 00:25:10.108 "read": true, 00:25:10.108 "write": true, 00:25:10.108 "unmap": true, 00:25:10.108 "flush": true, 00:25:10.108 "reset": true, 00:25:10.108 "nvme_admin": false, 00:25:10.108 "nvme_io": false, 00:25:10.108 "nvme_io_md": false, 00:25:10.108 "write_zeroes": true, 00:25:10.108 "zcopy": true, 00:25:10.108 "get_zone_info": false, 00:25:10.108 "zone_management": false, 00:25:10.108 "zone_append": false, 00:25:10.108 "compare": false, 00:25:10.108 "compare_and_write": false, 00:25:10.108 "abort": true, 00:25:10.108 "seek_hole": false, 00:25:10.108 "seek_data": false, 00:25:10.108 "copy": true, 00:25:10.108 "nvme_iov_md": false 00:25:10.108 }, 00:25:10.108 "memory_domains": [ 00:25:10.108 { 00:25:10.108 "dma_device_id": "system", 00:25:10.108 "dma_device_type": 1 00:25:10.108 }, 00:25:10.108 { 00:25:10.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.108 "dma_device_type": 2 00:25:10.108 } 00:25:10.108 ], 00:25:10.108 "driver_specific": {} 00:25:10.108 } 00:25:10.108 ] 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.108 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.367 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.367 "name": "Existed_Raid", 00:25:10.367 "uuid": 
"c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:10.367 "strip_size_kb": 0, 00:25:10.367 "state": "online", 00:25:10.367 "raid_level": "raid1", 00:25:10.367 "superblock": true, 00:25:10.367 "num_base_bdevs": 4, 00:25:10.367 "num_base_bdevs_discovered": 4, 00:25:10.367 "num_base_bdevs_operational": 4, 00:25:10.367 "base_bdevs_list": [ 00:25:10.367 { 00:25:10.367 "name": "BaseBdev1", 00:25:10.367 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:10.367 "is_configured": true, 00:25:10.367 "data_offset": 2048, 00:25:10.367 "data_size": 63488 00:25:10.367 }, 00:25:10.367 { 00:25:10.367 "name": "BaseBdev2", 00:25:10.367 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:10.367 "is_configured": true, 00:25:10.367 "data_offset": 2048, 00:25:10.367 "data_size": 63488 00:25:10.367 }, 00:25:10.367 { 00:25:10.367 "name": "BaseBdev3", 00:25:10.367 "uuid": "db964dfb-a5a3-44af-b262-793a64ac73e8", 00:25:10.367 "is_configured": true, 00:25:10.367 "data_offset": 2048, 00:25:10.367 "data_size": 63488 00:25:10.367 }, 00:25:10.367 { 00:25:10.367 "name": "BaseBdev4", 00:25:10.367 "uuid": "b5a586ef-023f-4db3-8efc-a08bd6d3d0f6", 00:25:10.367 "is_configured": true, 00:25:10.367 "data_offset": 2048, 00:25:10.367 "data_size": 63488 00:25:10.367 } 00:25:10.367 ] 00:25:10.367 }' 00:25:10.367 23:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.367 23:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:10.934 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:11.192 [2024-07-13 23:11:00.445480] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.192 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:11.192 "name": "Existed_Raid", 00:25:11.192 "aliases": [ 00:25:11.192 "c93fba5d-d130-470c-96e4-e743f48648d0" 00:25:11.192 ], 00:25:11.192 "product_name": "Raid Volume", 00:25:11.192 "block_size": 512, 00:25:11.192 "num_blocks": 63488, 00:25:11.192 "uuid": "c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:11.192 "assigned_rate_limits": { 00:25:11.192 "rw_ios_per_sec": 0, 00:25:11.192 "rw_mbytes_per_sec": 0, 00:25:11.192 "r_mbytes_per_sec": 0, 00:25:11.192 "w_mbytes_per_sec": 0 00:25:11.192 }, 00:25:11.192 "claimed": false, 00:25:11.192 "zoned": false, 00:25:11.192 "supported_io_types": { 00:25:11.192 "read": true, 00:25:11.192 "write": true, 00:25:11.192 "unmap": false, 00:25:11.192 "flush": false, 00:25:11.192 "reset": true, 00:25:11.192 "nvme_admin": false, 00:25:11.192 "nvme_io": false, 00:25:11.192 "nvme_io_md": false, 00:25:11.192 
"write_zeroes": true, 00:25:11.192 "zcopy": false, 00:25:11.192 "get_zone_info": false, 00:25:11.192 "zone_management": false, 00:25:11.192 "zone_append": false, 00:25:11.192 "compare": false, 00:25:11.192 "compare_and_write": false, 00:25:11.192 "abort": false, 00:25:11.192 "seek_hole": false, 00:25:11.192 "seek_data": false, 00:25:11.192 "copy": false, 00:25:11.192 "nvme_iov_md": false 00:25:11.192 }, 00:25:11.192 "memory_domains": [ 00:25:11.192 { 00:25:11.192 "dma_device_id": "system", 00:25:11.192 "dma_device_type": 1 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.192 "dma_device_type": 2 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "system", 00:25:11.192 "dma_device_type": 1 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.192 "dma_device_type": 2 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "system", 00:25:11.192 "dma_device_type": 1 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.192 "dma_device_type": 2 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "system", 00:25:11.192 "dma_device_type": 1 00:25:11.192 }, 00:25:11.192 { 00:25:11.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.192 "dma_device_type": 2 00:25:11.192 } 00:25:11.192 ], 00:25:11.192 "driver_specific": { 00:25:11.192 "raid": { 00:25:11.192 "uuid": "c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:11.192 "strip_size_kb": 0, 00:25:11.192 "state": "online", 00:25:11.192 "raid_level": "raid1", 00:25:11.193 "superblock": true, 00:25:11.193 "num_base_bdevs": 4, 00:25:11.193 "num_base_bdevs_discovered": 4, 00:25:11.193 "num_base_bdevs_operational": 4, 00:25:11.193 "base_bdevs_list": [ 00:25:11.193 { 00:25:11.193 "name": "BaseBdev1", 00:25:11.193 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:11.193 "is_configured": true, 00:25:11.193 "data_offset": 2048, 00:25:11.193 "data_size": 63488 00:25:11.193 }, 00:25:11.193 { 00:25:11.193 "name": "BaseBdev2", 00:25:11.193 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:11.193 "is_configured": true, 00:25:11.193 "data_offset": 2048, 00:25:11.193 "data_size": 63488 00:25:11.193 }, 00:25:11.193 { 00:25:11.193 "name": "BaseBdev3", 00:25:11.193 "uuid": "db964dfb-a5a3-44af-b262-793a64ac73e8", 00:25:11.193 "is_configured": true, 00:25:11.193 "data_offset": 2048, 00:25:11.193 "data_size": 63488 00:25:11.193 }, 00:25:11.193 { 00:25:11.193 "name": "BaseBdev4", 00:25:11.193 "uuid": "b5a586ef-023f-4db3-8efc-a08bd6d3d0f6", 00:25:11.193 "is_configured": true, 00:25:11.193 "data_offset": 2048, 00:25:11.193 "data_size": 63488 00:25:11.193 } 00:25:11.193 ] 00:25:11.193 } 00:25:11.193 } 00:25:11.193 }' 00:25:11.193 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.193 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:11.193 BaseBdev2 00:25:11.193 BaseBdev3 00:25:11.193 BaseBdev4' 00:25:11.193 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.193 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:11.193 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:11.451 23:11:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:11.451 "name": "BaseBdev1", 00:25:11.451 "aliases": [ 00:25:11.451 "26bcec33-7639-429d-8997-f972351c7d3b" 00:25:11.451 ], 00:25:11.451 "product_name": "Malloc disk", 00:25:11.451 "block_size": 512, 00:25:11.451 "num_blocks": 65536, 00:25:11.451 "uuid": "26bcec33-7639-429d-8997-f972351c7d3b", 00:25:11.451 "assigned_rate_limits": { 00:25:11.451 "rw_ios_per_sec": 0, 00:25:11.451 "rw_mbytes_per_sec": 0, 00:25:11.451 "r_mbytes_per_sec": 0, 00:25:11.451 "w_mbytes_per_sec": 0 00:25:11.451 }, 00:25:11.451 "claimed": true, 00:25:11.451 "claim_type": "exclusive_write", 00:25:11.451 "zoned": false, 00:25:11.451 "supported_io_types": { 00:25:11.451 "read": true, 00:25:11.451 "write": true, 00:25:11.451 "unmap": true, 00:25:11.451 "flush": true, 00:25:11.451 "reset": true, 00:25:11.451 "nvme_admin": false, 00:25:11.451 "nvme_io": false, 00:25:11.451 "nvme_io_md": false, 00:25:11.451 "write_zeroes": true, 00:25:11.451 "zcopy": true, 00:25:11.451 "get_zone_info": false, 00:25:11.451 "zone_management": false, 00:25:11.451 "zone_append": false, 00:25:11.451 "compare": false, 00:25:11.451 "compare_and_write": false, 00:25:11.451 "abort": true, 00:25:11.451 "seek_hole": false, 00:25:11.451 "seek_data": false, 00:25:11.451 "copy": true, 00:25:11.451 "nvme_iov_md": false 00:25:11.451 }, 00:25:11.451 "memory_domains": [ 00:25:11.451 { 00:25:11.451 "dma_device_id": "system", 00:25:11.451 "dma_device_type": 1 00:25:11.451 }, 00:25:11.451 { 00:25:11.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.451 "dma_device_type": 2 00:25:11.451 } 00:25:11.451 ], 00:25:11.451 "driver_specific": {} 00:25:11.451 }' 00:25:11.451 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.451 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.451 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:11.451 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.451 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.709 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:11.709 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.709 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.709 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:11.709 23:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.709 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.709 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:11.709 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.709 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:11.709 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:11.967 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:11.967 "name": "BaseBdev2", 00:25:11.967 "aliases": [ 
00:25:11.967 "b43acf16-919e-44c9-8ea8-f32d5380f8e2" 00:25:11.967 ], 00:25:11.967 "product_name": "Malloc disk", 00:25:11.967 "block_size": 512, 00:25:11.967 "num_blocks": 65536, 00:25:11.967 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:11.967 "assigned_rate_limits": { 00:25:11.967 "rw_ios_per_sec": 0, 00:25:11.967 "rw_mbytes_per_sec": 0, 00:25:11.967 "r_mbytes_per_sec": 0, 00:25:11.967 "w_mbytes_per_sec": 0 00:25:11.967 }, 00:25:11.967 "claimed": true, 00:25:11.967 "claim_type": "exclusive_write", 00:25:11.967 "zoned": false, 00:25:11.967 "supported_io_types": { 00:25:11.967 "read": true, 00:25:11.967 "write": true, 00:25:11.967 "unmap": true, 00:25:11.967 "flush": true, 00:25:11.967 "reset": true, 00:25:11.967 "nvme_admin": false, 00:25:11.967 "nvme_io": false, 00:25:11.967 "nvme_io_md": false, 00:25:11.967 "write_zeroes": true, 00:25:11.967 "zcopy": true, 00:25:11.967 "get_zone_info": false, 00:25:11.967 "zone_management": false, 00:25:11.967 "zone_append": false, 00:25:11.967 "compare": false, 00:25:11.967 "compare_and_write": false, 00:25:11.967 "abort": true, 00:25:11.967 "seek_hole": false, 00:25:11.967 "seek_data": false, 00:25:11.967 "copy": true, 00:25:11.967 "nvme_iov_md": false 00:25:11.967 }, 00:25:11.967 "memory_domains": [ 00:25:11.967 { 00:25:11.967 "dma_device_id": "system", 00:25:11.967 "dma_device_type": 1 00:25:11.967 }, 00:25:11.967 { 00:25:11.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.967 "dma_device_type": 2 00:25:11.967 } 00:25:11.967 ], 00:25:11.967 "driver_specific": {} 00:25:11.967 }' 00:25:11.967 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.225 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.484 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.484 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.484 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.484 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:12.484 23:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.754 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.754 "name": "BaseBdev3", 00:25:12.754 "aliases": [ 00:25:12.754 "db964dfb-a5a3-44af-b262-793a64ac73e8" 00:25:12.754 ], 00:25:12.754 "product_name": "Malloc disk", 00:25:12.754 "block_size": 512, 
00:25:12.754 "num_blocks": 65536, 00:25:12.754 "uuid": "db964dfb-a5a3-44af-b262-793a64ac73e8", 00:25:12.754 "assigned_rate_limits": { 00:25:12.754 "rw_ios_per_sec": 0, 00:25:12.754 "rw_mbytes_per_sec": 0, 00:25:12.754 "r_mbytes_per_sec": 0, 00:25:12.754 "w_mbytes_per_sec": 0 00:25:12.754 }, 00:25:12.754 "claimed": true, 00:25:12.754 "claim_type": "exclusive_write", 00:25:12.754 "zoned": false, 00:25:12.754 "supported_io_types": { 00:25:12.754 "read": true, 00:25:12.754 "write": true, 00:25:12.754 "unmap": true, 00:25:12.754 "flush": true, 00:25:12.754 "reset": true, 00:25:12.754 "nvme_admin": false, 00:25:12.754 "nvme_io": false, 00:25:12.754 "nvme_io_md": false, 00:25:12.754 "write_zeroes": true, 00:25:12.754 "zcopy": true, 00:25:12.754 "get_zone_info": false, 00:25:12.754 "zone_management": false, 00:25:12.754 "zone_append": false, 00:25:12.754 "compare": false, 00:25:12.754 "compare_and_write": false, 00:25:12.754 "abort": true, 00:25:12.754 "seek_hole": false, 00:25:12.754 "seek_data": false, 00:25:12.754 "copy": true, 00:25:12.754 "nvme_iov_md": false 00:25:12.754 }, 00:25:12.754 "memory_domains": [ 00:25:12.754 { 00:25:12.754 "dma_device_id": "system", 00:25:12.754 "dma_device_type": 1 00:25:12.754 }, 00:25:12.754 { 00:25:12.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.755 "dma_device_type": 2 00:25:12.755 } 00:25:12.755 ], 00:25:12.755 "driver_specific": {} 00:25:12.755 }' 00:25:12.755 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.755 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.755 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.755 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.755 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:13.026 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.284 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.284 "name": "BaseBdev4", 00:25:13.284 "aliases": [ 00:25:13.284 "b5a586ef-023f-4db3-8efc-a08bd6d3d0f6" 00:25:13.284 ], 00:25:13.284 "product_name": "Malloc disk", 00:25:13.284 "block_size": 512, 00:25:13.284 "num_blocks": 65536, 00:25:13.284 "uuid": "b5a586ef-023f-4db3-8efc-a08bd6d3d0f6", 00:25:13.284 "assigned_rate_limits": { 00:25:13.284 
"rw_ios_per_sec": 0, 00:25:13.284 "rw_mbytes_per_sec": 0, 00:25:13.284 "r_mbytes_per_sec": 0, 00:25:13.284 "w_mbytes_per_sec": 0 00:25:13.284 }, 00:25:13.284 "claimed": true, 00:25:13.284 "claim_type": "exclusive_write", 00:25:13.284 "zoned": false, 00:25:13.284 "supported_io_types": { 00:25:13.284 "read": true, 00:25:13.284 "write": true, 00:25:13.284 "unmap": true, 00:25:13.284 "flush": true, 00:25:13.284 "reset": true, 00:25:13.284 "nvme_admin": false, 00:25:13.284 "nvme_io": false, 00:25:13.284 "nvme_io_md": false, 00:25:13.284 "write_zeroes": true, 00:25:13.284 "zcopy": true, 00:25:13.284 "get_zone_info": false, 00:25:13.284 "zone_management": false, 00:25:13.284 "zone_append": false, 00:25:13.284 "compare": false, 00:25:13.284 "compare_and_write": false, 00:25:13.284 "abort": true, 00:25:13.284 "seek_hole": false, 00:25:13.284 "seek_data": false, 00:25:13.284 "copy": true, 00:25:13.284 "nvme_iov_md": false 00:25:13.284 }, 00:25:13.284 "memory_domains": [ 00:25:13.284 { 00:25:13.284 "dma_device_id": "system", 00:25:13.284 "dma_device_type": 1 00:25:13.284 }, 00:25:13.284 { 00:25:13.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.284 "dma_device_type": 2 00:25:13.284 } 00:25:13.284 ], 00:25:13.284 "driver_specific": {} 00:25:13.284 }' 00:25:13.284 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.542 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.801 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.801 23:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.801 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.801 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.801 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:14.059 [2024-07-13 23:11:03.278467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:14.059 23:11:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.059 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.317 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.317 "name": "Existed_Raid", 00:25:14.317 "uuid": "c93fba5d-d130-470c-96e4-e743f48648d0", 00:25:14.317 "strip_size_kb": 0, 00:25:14.317 "state": "online", 00:25:14.317 "raid_level": "raid1", 00:25:14.317 "superblock": true, 00:25:14.317 "num_base_bdevs": 4, 00:25:14.317 "num_base_bdevs_discovered": 3, 00:25:14.317 "num_base_bdevs_operational": 3, 00:25:14.317 "base_bdevs_list": [ 00:25:14.317 { 00:25:14.317 "name": null, 00:25:14.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.317 "is_configured": false, 00:25:14.317 "data_offset": 2048, 00:25:14.317 "data_size": 63488 00:25:14.317 }, 00:25:14.317 { 00:25:14.317 "name": "BaseBdev2", 00:25:14.317 "uuid": "b43acf16-919e-44c9-8ea8-f32d5380f8e2", 00:25:14.317 "is_configured": true, 00:25:14.317 "data_offset": 2048, 00:25:14.317 "data_size": 63488 00:25:14.317 }, 00:25:14.317 { 00:25:14.317 "name": "BaseBdev3", 00:25:14.317 "uuid": "db964dfb-a5a3-44af-b262-793a64ac73e8", 00:25:14.317 "is_configured": true, 00:25:14.317 "data_offset": 2048, 00:25:14.317 "data_size": 63488 00:25:14.317 }, 00:25:14.317 { 00:25:14.317 "name": "BaseBdev4", 00:25:14.317 "uuid": "b5a586ef-023f-4db3-8efc-a08bd6d3d0f6", 00:25:14.317 "is_configured": true, 00:25:14.317 "data_offset": 2048, 00:25:14.317 "data_size": 63488 00:25:14.317 } 00:25:14.317 ] 00:25:14.317 }' 00:25:14.317 23:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.317 23:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.883 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:14.883 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:14.883 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.883 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.142 23:11:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.142 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.142 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:15.401 [2024-07-13 23:11:04.589731] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.401 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.401 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.401 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.401 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.660 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.660 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.660 23:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:15.660 [2024-07-13 23:11:05.032564] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:15.660 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.660 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.660 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.660 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.918 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.918 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.918 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:16.177 [2024-07-13 23:11:05.511699] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:16.177 [2024-07-13 23:11:05.513042] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.177 [2024-07-13 23:11:05.523255] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.177 [2024-07-13 23:11:05.523541] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.177 [2024-07-13 23:11:05.523813] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:25:16.177 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:16.177 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:16.177 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.177 
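With BaseBdev2, BaseBdev3 and BaseBdev4 deleted in turn, the last removal drives the raid bdev from online to offline and frees it (the raid_bdev_deconfigure / _raid_bdev_destruct lines above), so the follow-up query should find no raid bdev at all. A sketch of that emptiness check, under the same $rpc/$sock assumptions:

# After the last base bdev is gone, listing raid bdevs must come back empty;
# the trace expresses this as raid_bdev='' and a failing '[' -n '' ']'.
raid_bdev=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[0]["name"] | select(.)')
[[ -z $raid_bdev ]]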
23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:16.436 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:16.436 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:16.436 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:16.436 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:16.436 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:16.436 23:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:16.695 BaseBdev2 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:16.695 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:16.954 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:17.212 [ 00:25:17.212 { 00:25:17.213 "name": "BaseBdev2", 00:25:17.213 "aliases": [ 00:25:17.213 "42b4c47f-7b41-4361-8e09-cd4e406a7f59" 00:25:17.213 ], 00:25:17.213 "product_name": "Malloc disk", 00:25:17.213 "block_size": 512, 00:25:17.213 "num_blocks": 65536, 00:25:17.213 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:17.213 "assigned_rate_limits": { 00:25:17.213 "rw_ios_per_sec": 0, 00:25:17.213 "rw_mbytes_per_sec": 0, 00:25:17.213 "r_mbytes_per_sec": 0, 00:25:17.213 "w_mbytes_per_sec": 0 00:25:17.213 }, 00:25:17.213 "claimed": false, 00:25:17.213 "zoned": false, 00:25:17.213 "supported_io_types": { 00:25:17.213 "read": true, 00:25:17.213 "write": true, 00:25:17.213 "unmap": true, 00:25:17.213 "flush": true, 00:25:17.213 "reset": true, 00:25:17.213 "nvme_admin": false, 00:25:17.213 "nvme_io": false, 00:25:17.213 "nvme_io_md": false, 00:25:17.213 "write_zeroes": true, 00:25:17.213 "zcopy": true, 00:25:17.213 "get_zone_info": false, 00:25:17.213 "zone_management": false, 00:25:17.213 "zone_append": false, 00:25:17.213 "compare": false, 00:25:17.213 "compare_and_write": false, 00:25:17.213 "abort": true, 00:25:17.213 "seek_hole": false, 00:25:17.213 "seek_data": false, 00:25:17.213 "copy": true, 00:25:17.213 "nvme_iov_md": false 00:25:17.213 }, 00:25:17.213 "memory_domains": [ 00:25:17.213 { 00:25:17.213 "dma_device_id": "system", 00:25:17.213 "dma_device_type": 1 00:25:17.213 }, 00:25:17.213 { 00:25:17.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.213 "dma_device_type": 2 00:25:17.213 } 00:25:17.213 ], 00:25:17.213 "driver_specific": {} 00:25:17.213 } 
00:25:17.213 ] 00:25:17.213 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:17.213 23:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:17.213 23:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:17.213 23:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:17.471 BaseBdev3 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:17.472 23:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.731 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:17.990 [ 00:25:17.990 { 00:25:17.990 "name": "BaseBdev3", 00:25:17.990 "aliases": [ 00:25:17.990 "8486e1b7-75f5-41e3-9032-80cceaec115a" 00:25:17.990 ], 00:25:17.990 "product_name": "Malloc disk", 00:25:17.990 "block_size": 512, 00:25:17.990 "num_blocks": 65536, 00:25:17.990 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:17.990 "assigned_rate_limits": { 00:25:17.990 "rw_ios_per_sec": 0, 00:25:17.990 "rw_mbytes_per_sec": 0, 00:25:17.990 "r_mbytes_per_sec": 0, 00:25:17.990 "w_mbytes_per_sec": 0 00:25:17.990 }, 00:25:17.990 "claimed": false, 00:25:17.990 "zoned": false, 00:25:17.990 "supported_io_types": { 00:25:17.990 "read": true, 00:25:17.990 "write": true, 00:25:17.990 "unmap": true, 00:25:17.990 "flush": true, 00:25:17.990 "reset": true, 00:25:17.990 "nvme_admin": false, 00:25:17.990 "nvme_io": false, 00:25:17.990 "nvme_io_md": false, 00:25:17.990 "write_zeroes": true, 00:25:17.990 "zcopy": true, 00:25:17.990 "get_zone_info": false, 00:25:17.990 "zone_management": false, 00:25:17.990 "zone_append": false, 00:25:17.990 "compare": false, 00:25:17.990 "compare_and_write": false, 00:25:17.990 "abort": true, 00:25:17.990 "seek_hole": false, 00:25:17.990 "seek_data": false, 00:25:17.990 "copy": true, 00:25:17.990 "nvme_iov_md": false 00:25:17.990 }, 00:25:17.990 "memory_domains": [ 00:25:17.990 { 00:25:17.990 "dma_device_id": "system", 00:25:17.990 "dma_device_type": 1 00:25:17.990 }, 00:25:17.990 { 00:25:17.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.990 "dma_device_type": 2 00:25:17.990 } 00:25:17.990 ], 00:25:17.990 "driver_specific": {} 00:25:17.990 } 00:25:17.990 ] 00:25:17.990 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:17.990 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:17.990 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # 
(( i < num_base_bdevs )) 00:25:17.990 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:18.249 BaseBdev4 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:18.249 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:18.507 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:18.766 [ 00:25:18.766 { 00:25:18.766 "name": "BaseBdev4", 00:25:18.766 "aliases": [ 00:25:18.766 "e7716d17-d525-49b4-9d28-9cda3b4a1d23" 00:25:18.766 ], 00:25:18.766 "product_name": "Malloc disk", 00:25:18.766 "block_size": 512, 00:25:18.766 "num_blocks": 65536, 00:25:18.766 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:18.766 "assigned_rate_limits": { 00:25:18.766 "rw_ios_per_sec": 0, 00:25:18.766 "rw_mbytes_per_sec": 0, 00:25:18.766 "r_mbytes_per_sec": 0, 00:25:18.766 "w_mbytes_per_sec": 0 00:25:18.766 }, 00:25:18.766 "claimed": false, 00:25:18.766 "zoned": false, 00:25:18.766 "supported_io_types": { 00:25:18.766 "read": true, 00:25:18.766 "write": true, 00:25:18.766 "unmap": true, 00:25:18.766 "flush": true, 00:25:18.766 "reset": true, 00:25:18.766 "nvme_admin": false, 00:25:18.766 "nvme_io": false, 00:25:18.766 "nvme_io_md": false, 00:25:18.766 "write_zeroes": true, 00:25:18.766 "zcopy": true, 00:25:18.766 "get_zone_info": false, 00:25:18.766 "zone_management": false, 00:25:18.766 "zone_append": false, 00:25:18.766 "compare": false, 00:25:18.766 "compare_and_write": false, 00:25:18.766 "abort": true, 00:25:18.766 "seek_hole": false, 00:25:18.766 "seek_data": false, 00:25:18.766 "copy": true, 00:25:18.766 "nvme_iov_md": false 00:25:18.766 }, 00:25:18.766 "memory_domains": [ 00:25:18.766 { 00:25:18.766 "dma_device_id": "system", 00:25:18.766 "dma_device_type": 1 00:25:18.766 }, 00:25:18.767 { 00:25:18.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.767 "dma_device_type": 2 00:25:18.767 } 00:25:18.767 ], 00:25:18.767 "driver_specific": {} 00:25:18.767 } 00:25:18.767 ] 00:25:18.767 23:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:18.767 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:18.767 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:18.767 23:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:18.767 [2024-07-13 
23:11:08.126844] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:18.767 [2024-07-13 23:11:08.127377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:18.767 [2024-07-13 23:11:08.127611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.767 [2024-07-13 23:11:08.130283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:18.767 [2024-07-13 23:11:08.130606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.767 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.026 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.026 "name": "Existed_Raid", 00:25:19.026 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:19.026 "strip_size_kb": 0, 00:25:19.026 "state": "configuring", 00:25:19.026 "raid_level": "raid1", 00:25:19.026 "superblock": true, 00:25:19.026 "num_base_bdevs": 4, 00:25:19.026 "num_base_bdevs_discovered": 3, 00:25:19.026 "num_base_bdevs_operational": 4, 00:25:19.026 "base_bdevs_list": [ 00:25:19.026 { 00:25:19.026 "name": "BaseBdev1", 00:25:19.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.026 "is_configured": false, 00:25:19.026 "data_offset": 0, 00:25:19.026 "data_size": 0 00:25:19.026 }, 00:25:19.026 { 00:25:19.026 "name": "BaseBdev2", 00:25:19.026 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:19.026 "is_configured": true, 00:25:19.026 "data_offset": 2048, 00:25:19.026 "data_size": 63488 00:25:19.026 }, 00:25:19.026 { 00:25:19.026 "name": "BaseBdev3", 00:25:19.026 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:19.026 "is_configured": true, 00:25:19.026 "data_offset": 2048, 00:25:19.027 "data_size": 63488 00:25:19.027 }, 00:25:19.027 { 00:25:19.027 "name": "BaseBdev4", 00:25:19.027 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:19.027 "is_configured": true, 00:25:19.027 "data_offset": 2048, 00:25:19.027 "data_size": 63488 00:25:19.027 } 00:25:19.027 ] 
00:25:19.027 }' 00:25:19.027 23:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.027 23:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:19.962 [2024-07-13 23:11:09.263618] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.962 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.221 23:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.221 "name": "Existed_Raid", 00:25:20.221 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:20.221 "strip_size_kb": 0, 00:25:20.221 "state": "configuring", 00:25:20.221 "raid_level": "raid1", 00:25:20.221 "superblock": true, 00:25:20.221 "num_base_bdevs": 4, 00:25:20.221 "num_base_bdevs_discovered": 2, 00:25:20.221 "num_base_bdevs_operational": 4, 00:25:20.221 "base_bdevs_list": [ 00:25:20.221 { 00:25:20.221 "name": "BaseBdev1", 00:25:20.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.221 "is_configured": false, 00:25:20.221 "data_offset": 0, 00:25:20.221 "data_size": 0 00:25:20.221 }, 00:25:20.221 { 00:25:20.221 "name": null, 00:25:20.221 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:20.221 "is_configured": false, 00:25:20.221 "data_offset": 2048, 00:25:20.221 "data_size": 63488 00:25:20.221 }, 00:25:20.221 { 00:25:20.221 "name": "BaseBdev3", 00:25:20.221 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:20.221 "is_configured": true, 00:25:20.221 "data_offset": 2048, 00:25:20.221 "data_size": 63488 00:25:20.221 }, 00:25:20.221 { 00:25:20.221 "name": "BaseBdev4", 00:25:20.221 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:20.221 "is_configured": true, 00:25:20.221 "data_offset": 2048, 00:25:20.221 "data_size": 63488 00:25:20.221 } 00:25:20.221 ] 00:25:20.221 }' 00:25:20.221 23:11:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.221 23:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.157 23:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.157 23:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:21.157 23:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:21.157 23:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:21.416 [2024-07-13 23:11:10.686670] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.416 BaseBdev1 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:21.416 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:21.675 23:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:21.934 [ 00:25:21.934 { 00:25:21.934 "name": "BaseBdev1", 00:25:21.934 "aliases": [ 00:25:21.934 "1c3aca0d-90ed-4091-83c4-1d78e171cf4e" 00:25:21.934 ], 00:25:21.934 "product_name": "Malloc disk", 00:25:21.934 "block_size": 512, 00:25:21.934 "num_blocks": 65536, 00:25:21.934 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:21.934 "assigned_rate_limits": { 00:25:21.934 "rw_ios_per_sec": 0, 00:25:21.934 "rw_mbytes_per_sec": 0, 00:25:21.934 "r_mbytes_per_sec": 0, 00:25:21.934 "w_mbytes_per_sec": 0 00:25:21.934 }, 00:25:21.934 "claimed": true, 00:25:21.934 "claim_type": "exclusive_write", 00:25:21.934 "zoned": false, 00:25:21.934 "supported_io_types": { 00:25:21.934 "read": true, 00:25:21.934 "write": true, 00:25:21.934 "unmap": true, 00:25:21.934 "flush": true, 00:25:21.934 "reset": true, 00:25:21.934 "nvme_admin": false, 00:25:21.934 "nvme_io": false, 00:25:21.934 "nvme_io_md": false, 00:25:21.934 "write_zeroes": true, 00:25:21.934 "zcopy": true, 00:25:21.934 "get_zone_info": false, 00:25:21.934 "zone_management": false, 00:25:21.934 "zone_append": false, 00:25:21.934 "compare": false, 00:25:21.934 "compare_and_write": false, 00:25:21.934 "abort": true, 00:25:21.934 "seek_hole": false, 00:25:21.934 "seek_data": false, 00:25:21.934 "copy": true, 00:25:21.934 "nvme_iov_md": false 00:25:21.934 }, 00:25:21.934 "memory_domains": [ 00:25:21.934 { 00:25:21.934 "dma_device_id": "system", 00:25:21.934 "dma_device_type": 1 00:25:21.934 }, 00:25:21.934 { 00:25:21.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.934 
"dma_device_type": 2 00:25:21.934 } 00:25:21.934 ], 00:25:21.934 "driver_specific": {} 00:25:21.934 } 00:25:21.934 ] 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.934 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.193 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.193 "name": "Existed_Raid", 00:25:22.193 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:22.193 "strip_size_kb": 0, 00:25:22.193 "state": "configuring", 00:25:22.193 "raid_level": "raid1", 00:25:22.193 "superblock": true, 00:25:22.193 "num_base_bdevs": 4, 00:25:22.193 "num_base_bdevs_discovered": 3, 00:25:22.193 "num_base_bdevs_operational": 4, 00:25:22.193 "base_bdevs_list": [ 00:25:22.193 { 00:25:22.193 "name": "BaseBdev1", 00:25:22.193 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:22.193 "is_configured": true, 00:25:22.193 "data_offset": 2048, 00:25:22.193 "data_size": 63488 00:25:22.193 }, 00:25:22.193 { 00:25:22.193 "name": null, 00:25:22.193 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:22.193 "is_configured": false, 00:25:22.193 "data_offset": 2048, 00:25:22.193 "data_size": 63488 00:25:22.193 }, 00:25:22.193 { 00:25:22.193 "name": "BaseBdev3", 00:25:22.193 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:22.193 "is_configured": true, 00:25:22.193 "data_offset": 2048, 00:25:22.193 "data_size": 63488 00:25:22.193 }, 00:25:22.193 { 00:25:22.193 "name": "BaseBdev4", 00:25:22.193 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:22.193 "is_configured": true, 00:25:22.193 "data_offset": 2048, 00:25:22.193 "data_size": 63488 00:25:22.193 } 00:25:22.193 ] 00:25:22.193 }' 00:25:22.193 23:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.193 23:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.761 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:22.762 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:23.020 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:23.020 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:23.279 [2024-07-13 23:11:12.576447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.279 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.538 23:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.538 "name": "Existed_Raid", 00:25:23.538 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:23.538 "strip_size_kb": 0, 00:25:23.538 "state": "configuring", 00:25:23.538 "raid_level": "raid1", 00:25:23.538 "superblock": true, 00:25:23.538 "num_base_bdevs": 4, 00:25:23.538 "num_base_bdevs_discovered": 2, 00:25:23.538 "num_base_bdevs_operational": 4, 00:25:23.538 "base_bdevs_list": [ 00:25:23.538 { 00:25:23.538 "name": "BaseBdev1", 00:25:23.538 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:23.538 "is_configured": true, 00:25:23.538 "data_offset": 2048, 00:25:23.538 "data_size": 63488 00:25:23.538 }, 00:25:23.538 { 00:25:23.538 "name": null, 00:25:23.538 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:23.538 "is_configured": false, 00:25:23.538 "data_offset": 2048, 00:25:23.538 "data_size": 63488 00:25:23.538 }, 00:25:23.538 { 00:25:23.538 "name": null, 00:25:23.538 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:23.538 "is_configured": false, 00:25:23.538 "data_offset": 2048, 00:25:23.538 "data_size": 63488 00:25:23.538 }, 00:25:23.538 { 00:25:23.538 "name": "BaseBdev4", 00:25:23.538 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:23.538 "is_configured": true, 00:25:23.538 "data_offset": 2048, 00:25:23.538 "data_size": 63488 00:25:23.538 } 00:25:23.538 ] 00:25:23.538 }' 00:25:23.538 23:11:12 
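Note what removal does to a superblock ("-s") raid that is still configuring: the slot for BaseBdev3 is not discarded, its uuid stays in base_bdevs_list with "name": null and is_configured false (two such placeholder slots are visible in the dump above), and bdev_raid_add_base_bdev can later re-populate it. Roughly (a sketch; $rpc/$sock as introduced earlier):

"$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev3

# The third slot survives as an unconfigured placeholder...
cfg=$("$rpc" -s "$sock" bdev_raid_get_bdevs all)
[[ $(jq '.[0].base_bdevs_list[2].is_configured' <<< "$cfg") == false ]]

# ...and the bdev can be handed back to the array.
"$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev3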
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.538 23:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.104 23:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.362 23:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:24.620 23:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:24.620 23:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:24.620 [2024-07-13 23:11:13.989102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.620 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.879 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.879 "name": "Existed_Raid", 00:25:24.879 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:24.879 "strip_size_kb": 0, 00:25:24.879 "state": "configuring", 00:25:24.879 "raid_level": "raid1", 00:25:24.879 "superblock": true, 00:25:24.879 "num_base_bdevs": 4, 00:25:24.879 "num_base_bdevs_discovered": 3, 00:25:24.879 "num_base_bdevs_operational": 4, 00:25:24.879 "base_bdevs_list": [ 00:25:24.879 { 00:25:24.879 "name": "BaseBdev1", 00:25:24.879 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:24.879 "is_configured": true, 00:25:24.879 "data_offset": 2048, 00:25:24.879 "data_size": 63488 00:25:24.879 }, 00:25:24.879 { 00:25:24.879 "name": null, 00:25:24.879 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:24.879 "is_configured": false, 00:25:24.879 "data_offset": 2048, 00:25:24.879 "data_size": 63488 00:25:24.879 }, 00:25:24.879 { 00:25:24.879 "name": "BaseBdev3", 00:25:24.879 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:24.879 "is_configured": 
true, 00:25:24.879 "data_offset": 2048, 00:25:24.879 "data_size": 63488 00:25:24.879 }, 00:25:24.879 { 00:25:24.879 "name": "BaseBdev4", 00:25:24.879 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:24.879 "is_configured": true, 00:25:24.879 "data_offset": 2048, 00:25:24.879 "data_size": 63488 00:25:24.879 } 00:25:24.879 ] 00:25:24.879 }' 00:25:24.879 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.879 23:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.814 23:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:25.814 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:25.814 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:26.073 [2024-07-13 23:11:15.441608] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.073 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.332 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.332 "name": "Existed_Raid", 00:25:26.332 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:26.332 "strip_size_kb": 0, 00:25:26.332 "state": "configuring", 00:25:26.332 "raid_level": "raid1", 00:25:26.332 "superblock": true, 00:25:26.332 "num_base_bdevs": 4, 00:25:26.332 "num_base_bdevs_discovered": 2, 00:25:26.332 "num_base_bdevs_operational": 4, 00:25:26.332 "base_bdevs_list": [ 00:25:26.332 { 00:25:26.332 "name": null, 00:25:26.332 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:26.332 "is_configured": false, 00:25:26.332 "data_offset": 2048, 00:25:26.332 "data_size": 63488 00:25:26.332 }, 00:25:26.332 { 00:25:26.332 "name": 
null, 00:25:26.332 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:26.332 "is_configured": false, 00:25:26.332 "data_offset": 2048, 00:25:26.332 "data_size": 63488 00:25:26.332 }, 00:25:26.332 { 00:25:26.332 "name": "BaseBdev3", 00:25:26.332 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:26.332 "is_configured": true, 00:25:26.332 "data_offset": 2048, 00:25:26.332 "data_size": 63488 00:25:26.332 }, 00:25:26.332 { 00:25:26.332 "name": "BaseBdev4", 00:25:26.332 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:26.332 "is_configured": true, 00:25:26.332 "data_offset": 2048, 00:25:26.332 "data_size": 63488 00:25:26.332 } 00:25:26.332 ] 00:25:26.332 }' 00:25:26.332 23:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.332 23:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.268 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.268 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:27.268 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:27.268 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:27.526 [2024-07-13 23:11:16.822922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.526 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.527 23:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.785 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.785 "name": "Existed_Raid", 00:25:27.785 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:27.785 "strip_size_kb": 0, 00:25:27.785 "state": "configuring", 00:25:27.785 "raid_level": "raid1", 00:25:27.785 "superblock": true, 00:25:27.785 "num_base_bdevs": 4, 00:25:27.785 
"num_base_bdevs_discovered": 3, 00:25:27.785 "num_base_bdevs_operational": 4, 00:25:27.785 "base_bdevs_list": [ 00:25:27.785 { 00:25:27.785 "name": null, 00:25:27.785 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:27.785 "is_configured": false, 00:25:27.785 "data_offset": 2048, 00:25:27.785 "data_size": 63488 00:25:27.785 }, 00:25:27.785 { 00:25:27.785 "name": "BaseBdev2", 00:25:27.785 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:27.785 "is_configured": true, 00:25:27.785 "data_offset": 2048, 00:25:27.785 "data_size": 63488 00:25:27.785 }, 00:25:27.785 { 00:25:27.785 "name": "BaseBdev3", 00:25:27.785 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:27.785 "is_configured": true, 00:25:27.785 "data_offset": 2048, 00:25:27.785 "data_size": 63488 00:25:27.785 }, 00:25:27.785 { 00:25:27.785 "name": "BaseBdev4", 00:25:27.785 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:27.785 "is_configured": true, 00:25:27.785 "data_offset": 2048, 00:25:27.785 "data_size": 63488 00:25:27.785 } 00:25:27.785 ] 00:25:27.785 }' 00:25:27.785 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.785 23:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.353 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:28.353 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.611 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:28.611 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.611 23:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:28.870 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1c3aca0d-90ed-4091-83c4-1d78e171cf4e 00:25:29.129 [2024-07-13 23:11:18.480199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:29.129 [2024-07-13 23:11:18.480612] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:25:29.129 [2024-07-13 23:11:18.480741] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:29.129 [2024-07-13 23:11:18.480875] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:29.129 NewBaseBdev 00:25:29.129 [2024-07-13 23:11:18.481492] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:25:29.129 [2024-07-13 23:11:18.481508] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:25:29.129 [2024-07-13 23:11:18.481631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.129 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:29.129 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:29.129 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:29.129 23:11:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:29.129 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:29.129 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:29.129 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.388 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:29.647 [ 00:25:29.647 { 00:25:29.647 "name": "NewBaseBdev", 00:25:29.647 "aliases": [ 00:25:29.647 "1c3aca0d-90ed-4091-83c4-1d78e171cf4e" 00:25:29.647 ], 00:25:29.647 "product_name": "Malloc disk", 00:25:29.647 "block_size": 512, 00:25:29.647 "num_blocks": 65536, 00:25:29.647 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:29.647 "assigned_rate_limits": { 00:25:29.647 "rw_ios_per_sec": 0, 00:25:29.647 "rw_mbytes_per_sec": 0, 00:25:29.647 "r_mbytes_per_sec": 0, 00:25:29.647 "w_mbytes_per_sec": 0 00:25:29.647 }, 00:25:29.647 "claimed": true, 00:25:29.647 "claim_type": "exclusive_write", 00:25:29.647 "zoned": false, 00:25:29.647 "supported_io_types": { 00:25:29.647 "read": true, 00:25:29.647 "write": true, 00:25:29.647 "unmap": true, 00:25:29.647 "flush": true, 00:25:29.647 "reset": true, 00:25:29.647 "nvme_admin": false, 00:25:29.647 "nvme_io": false, 00:25:29.647 "nvme_io_md": false, 00:25:29.647 "write_zeroes": true, 00:25:29.647 "zcopy": true, 00:25:29.647 "get_zone_info": false, 00:25:29.647 "zone_management": false, 00:25:29.647 "zone_append": false, 00:25:29.647 "compare": false, 00:25:29.647 "compare_and_write": false, 00:25:29.647 "abort": true, 00:25:29.647 "seek_hole": false, 00:25:29.647 "seek_data": false, 00:25:29.647 "copy": true, 00:25:29.648 "nvme_iov_md": false 00:25:29.648 }, 00:25:29.648 "memory_domains": [ 00:25:29.648 { 00:25:29.648 "dma_device_id": "system", 00:25:29.648 "dma_device_type": 1 00:25:29.648 }, 00:25:29.648 { 00:25:29.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.648 "dma_device_type": 2 00:25:29.648 } 00:25:29.648 ], 00:25:29.648 "driver_specific": {} 00:25:29.648 } 00:25:29.648 ] 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
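
--- note: the waitforbdev helper traced just above (autotest_common.sh@897-905) reduces to two RPC calls against the test's socket. A minimal standalone sketch, assuming rpc.py from the same SPDK checkout and the same /var/tmp/spdk-raid.sock socket; the @335 state check then continues below ---

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # same 2000 ms default applied in the trace above
        $rpc bdev_wait_for_examine      # let pending examine callbacks finish first
        # -t makes bdev_get_bdevs block until the bdev appears or the timeout (in ms) expires
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }
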
00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.648 23:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.907 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.907 "name": "Existed_Raid", 00:25:29.907 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:29.907 "strip_size_kb": 0, 00:25:29.907 "state": "online", 00:25:29.907 "raid_level": "raid1", 00:25:29.907 "superblock": true, 00:25:29.907 "num_base_bdevs": 4, 00:25:29.907 "num_base_bdevs_discovered": 4, 00:25:29.907 "num_base_bdevs_operational": 4, 00:25:29.907 "base_bdevs_list": [ 00:25:29.907 { 00:25:29.907 "name": "NewBaseBdev", 00:25:29.907 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:29.907 "is_configured": true, 00:25:29.907 "data_offset": 2048, 00:25:29.907 "data_size": 63488 00:25:29.907 }, 00:25:29.907 { 00:25:29.907 "name": "BaseBdev2", 00:25:29.907 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:29.907 "is_configured": true, 00:25:29.907 "data_offset": 2048, 00:25:29.907 "data_size": 63488 00:25:29.907 }, 00:25:29.907 { 00:25:29.907 "name": "BaseBdev3", 00:25:29.907 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:29.907 "is_configured": true, 00:25:29.907 "data_offset": 2048, 00:25:29.907 "data_size": 63488 00:25:29.908 }, 00:25:29.908 { 00:25:29.908 "name": "BaseBdev4", 00:25:29.908 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:29.908 "is_configured": true, 00:25:29.908 "data_offset": 2048, 00:25:29.908 "data_size": 63488 00:25:29.908 } 00:25:29.908 ] 00:25:29.908 }' 00:25:29.908 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.908 23:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:30.475 23:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:30.734 [2024-07-13 23:11:20.024914] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.734 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:30.734 "name": "Existed_Raid", 00:25:30.734 "aliases": [ 00:25:30.734 "4cb249e0-b31f-4a82-b98f-013b8a586141" 00:25:30.734 ], 00:25:30.734 "product_name": "Raid Volume", 00:25:30.734 "block_size": 512, 00:25:30.734 "num_blocks": 63488, 00:25:30.734 "uuid": 
"4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:30.734 "assigned_rate_limits": { 00:25:30.734 "rw_ios_per_sec": 0, 00:25:30.734 "rw_mbytes_per_sec": 0, 00:25:30.734 "r_mbytes_per_sec": 0, 00:25:30.734 "w_mbytes_per_sec": 0 00:25:30.734 }, 00:25:30.734 "claimed": false, 00:25:30.734 "zoned": false, 00:25:30.734 "supported_io_types": { 00:25:30.734 "read": true, 00:25:30.734 "write": true, 00:25:30.734 "unmap": false, 00:25:30.734 "flush": false, 00:25:30.734 "reset": true, 00:25:30.734 "nvme_admin": false, 00:25:30.734 "nvme_io": false, 00:25:30.734 "nvme_io_md": false, 00:25:30.734 "write_zeroes": true, 00:25:30.734 "zcopy": false, 00:25:30.734 "get_zone_info": false, 00:25:30.734 "zone_management": false, 00:25:30.734 "zone_append": false, 00:25:30.734 "compare": false, 00:25:30.734 "compare_and_write": false, 00:25:30.734 "abort": false, 00:25:30.734 "seek_hole": false, 00:25:30.734 "seek_data": false, 00:25:30.734 "copy": false, 00:25:30.734 "nvme_iov_md": false 00:25:30.734 }, 00:25:30.734 "memory_domains": [ 00:25:30.734 { 00:25:30.734 "dma_device_id": "system", 00:25:30.734 "dma_device_type": 1 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.734 "dma_device_type": 2 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "system", 00:25:30.734 "dma_device_type": 1 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.734 "dma_device_type": 2 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "system", 00:25:30.734 "dma_device_type": 1 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.734 "dma_device_type": 2 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "system", 00:25:30.734 "dma_device_type": 1 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.734 "dma_device_type": 2 00:25:30.734 } 00:25:30.734 ], 00:25:30.734 "driver_specific": { 00:25:30.734 "raid": { 00:25:30.734 "uuid": "4cb249e0-b31f-4a82-b98f-013b8a586141", 00:25:30.734 "strip_size_kb": 0, 00:25:30.734 "state": "online", 00:25:30.734 "raid_level": "raid1", 00:25:30.734 "superblock": true, 00:25:30.734 "num_base_bdevs": 4, 00:25:30.734 "num_base_bdevs_discovered": 4, 00:25:30.734 "num_base_bdevs_operational": 4, 00:25:30.734 "base_bdevs_list": [ 00:25:30.734 { 00:25:30.734 "name": "NewBaseBdev", 00:25:30.734 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:30.734 "is_configured": true, 00:25:30.734 "data_offset": 2048, 00:25:30.734 "data_size": 63488 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "name": "BaseBdev2", 00:25:30.734 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:30.734 "is_configured": true, 00:25:30.734 "data_offset": 2048, 00:25:30.734 "data_size": 63488 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "name": "BaseBdev3", 00:25:30.734 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:30.734 "is_configured": true, 00:25:30.734 "data_offset": 2048, 00:25:30.734 "data_size": 63488 00:25:30.734 }, 00:25:30.734 { 00:25:30.734 "name": "BaseBdev4", 00:25:30.734 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:30.734 "is_configured": true, 00:25:30.734 "data_offset": 2048, 00:25:30.734 "data_size": 63488 00:25:30.734 } 00:25:30.734 ] 00:25:30.734 } 00:25:30.734 } 00:25:30.734 }' 00:25:30.734 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:30.734 23:11:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:30.734 BaseBdev2 00:25:30.734 BaseBdev3 00:25:30.734 BaseBdev4' 00:25:30.734 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:30.734 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:30.734 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:30.994 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:30.994 "name": "NewBaseBdev", 00:25:30.994 "aliases": [ 00:25:30.994 "1c3aca0d-90ed-4091-83c4-1d78e171cf4e" 00:25:30.994 ], 00:25:30.994 "product_name": "Malloc disk", 00:25:30.994 "block_size": 512, 00:25:30.994 "num_blocks": 65536, 00:25:30.994 "uuid": "1c3aca0d-90ed-4091-83c4-1d78e171cf4e", 00:25:30.994 "assigned_rate_limits": { 00:25:30.994 "rw_ios_per_sec": 0, 00:25:30.994 "rw_mbytes_per_sec": 0, 00:25:30.994 "r_mbytes_per_sec": 0, 00:25:30.994 "w_mbytes_per_sec": 0 00:25:30.994 }, 00:25:30.994 "claimed": true, 00:25:30.994 "claim_type": "exclusive_write", 00:25:30.994 "zoned": false, 00:25:30.994 "supported_io_types": { 00:25:30.994 "read": true, 00:25:30.994 "write": true, 00:25:30.994 "unmap": true, 00:25:30.994 "flush": true, 00:25:30.994 "reset": true, 00:25:30.994 "nvme_admin": false, 00:25:30.994 "nvme_io": false, 00:25:30.994 "nvme_io_md": false, 00:25:30.994 "write_zeroes": true, 00:25:30.994 "zcopy": true, 00:25:30.994 "get_zone_info": false, 00:25:30.994 "zone_management": false, 00:25:30.994 "zone_append": false, 00:25:30.994 "compare": false, 00:25:30.994 "compare_and_write": false, 00:25:30.994 "abort": true, 00:25:30.994 "seek_hole": false, 00:25:30.994 "seek_data": false, 00:25:30.994 "copy": true, 00:25:30.994 "nvme_iov_md": false 00:25:30.994 }, 00:25:30.994 "memory_domains": [ 00:25:30.994 { 00:25:30.994 "dma_device_id": "system", 00:25:30.994 "dma_device_type": 1 00:25:30.994 }, 00:25:30.994 { 00:25:30.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.994 "dma_device_type": 2 00:25:30.994 } 00:25:30.994 ], 00:25:30.994 "driver_specific": {} 00:25:30.994 }' 00:25:30.994 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.994 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.267 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.540 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:25:31.540 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.540 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:31.540 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.540 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.540 "name": "BaseBdev2", 00:25:31.540 "aliases": [ 00:25:31.540 "42b4c47f-7b41-4361-8e09-cd4e406a7f59" 00:25:31.540 ], 00:25:31.540 "product_name": "Malloc disk", 00:25:31.540 "block_size": 512, 00:25:31.540 "num_blocks": 65536, 00:25:31.540 "uuid": "42b4c47f-7b41-4361-8e09-cd4e406a7f59", 00:25:31.540 "assigned_rate_limits": { 00:25:31.540 "rw_ios_per_sec": 0, 00:25:31.540 "rw_mbytes_per_sec": 0, 00:25:31.540 "r_mbytes_per_sec": 0, 00:25:31.540 "w_mbytes_per_sec": 0 00:25:31.540 }, 00:25:31.540 "claimed": true, 00:25:31.540 "claim_type": "exclusive_write", 00:25:31.540 "zoned": false, 00:25:31.540 "supported_io_types": { 00:25:31.540 "read": true, 00:25:31.540 "write": true, 00:25:31.540 "unmap": true, 00:25:31.540 "flush": true, 00:25:31.540 "reset": true, 00:25:31.540 "nvme_admin": false, 00:25:31.540 "nvme_io": false, 00:25:31.540 "nvme_io_md": false, 00:25:31.540 "write_zeroes": true, 00:25:31.540 "zcopy": true, 00:25:31.540 "get_zone_info": false, 00:25:31.540 "zone_management": false, 00:25:31.540 "zone_append": false, 00:25:31.540 "compare": false, 00:25:31.540 "compare_and_write": false, 00:25:31.540 "abort": true, 00:25:31.540 "seek_hole": false, 00:25:31.540 "seek_data": false, 00:25:31.540 "copy": true, 00:25:31.540 "nvme_iov_md": false 00:25:31.540 }, 00:25:31.540 "memory_domains": [ 00:25:31.540 { 00:25:31.540 "dma_device_id": "system", 00:25:31.540 "dma_device_type": 1 00:25:31.540 }, 00:25:31.540 { 00:25:31.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.540 "dma_device_type": 2 00:25:31.540 } 00:25:31.540 ], 00:25:31.540 "driver_specific": {} 00:25:31.540 }' 00:25:31.540 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.798 23:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.798 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.798 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.798 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.798 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.798 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.798 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.055 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:32.055 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.055 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.055 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.055 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.055 
23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:32.055 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:32.313 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.313 "name": "BaseBdev3", 00:25:32.313 "aliases": [ 00:25:32.313 "8486e1b7-75f5-41e3-9032-80cceaec115a" 00:25:32.313 ], 00:25:32.313 "product_name": "Malloc disk", 00:25:32.313 "block_size": 512, 00:25:32.313 "num_blocks": 65536, 00:25:32.313 "uuid": "8486e1b7-75f5-41e3-9032-80cceaec115a", 00:25:32.313 "assigned_rate_limits": { 00:25:32.313 "rw_ios_per_sec": 0, 00:25:32.313 "rw_mbytes_per_sec": 0, 00:25:32.313 "r_mbytes_per_sec": 0, 00:25:32.313 "w_mbytes_per_sec": 0 00:25:32.313 }, 00:25:32.313 "claimed": true, 00:25:32.313 "claim_type": "exclusive_write", 00:25:32.313 "zoned": false, 00:25:32.313 "supported_io_types": { 00:25:32.313 "read": true, 00:25:32.313 "write": true, 00:25:32.313 "unmap": true, 00:25:32.313 "flush": true, 00:25:32.313 "reset": true, 00:25:32.313 "nvme_admin": false, 00:25:32.313 "nvme_io": false, 00:25:32.313 "nvme_io_md": false, 00:25:32.313 "write_zeroes": true, 00:25:32.313 "zcopy": true, 00:25:32.313 "get_zone_info": false, 00:25:32.313 "zone_management": false, 00:25:32.313 "zone_append": false, 00:25:32.313 "compare": false, 00:25:32.313 "compare_and_write": false, 00:25:32.313 "abort": true, 00:25:32.313 "seek_hole": false, 00:25:32.313 "seek_data": false, 00:25:32.313 "copy": true, 00:25:32.313 "nvme_iov_md": false 00:25:32.313 }, 00:25:32.313 "memory_domains": [ 00:25:32.313 { 00:25:32.313 "dma_device_id": "system", 00:25:32.313 "dma_device_type": 1 00:25:32.313 }, 00:25:32.313 { 00:25:32.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.313 "dma_device_type": 2 00:25:32.313 } 00:25:32.313 ], 00:25:32.313 "driver_specific": {} 00:25:32.313 }' 00:25:32.313 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.313 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.313 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:32.313 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.570 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.828 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev4 00:25:32.828 23:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:33.085 "name": "BaseBdev4", 00:25:33.085 "aliases": [ 00:25:33.085 "e7716d17-d525-49b4-9d28-9cda3b4a1d23" 00:25:33.085 ], 00:25:33.085 "product_name": "Malloc disk", 00:25:33.085 "block_size": 512, 00:25:33.085 "num_blocks": 65536, 00:25:33.085 "uuid": "e7716d17-d525-49b4-9d28-9cda3b4a1d23", 00:25:33.085 "assigned_rate_limits": { 00:25:33.085 "rw_ios_per_sec": 0, 00:25:33.085 "rw_mbytes_per_sec": 0, 00:25:33.085 "r_mbytes_per_sec": 0, 00:25:33.085 "w_mbytes_per_sec": 0 00:25:33.085 }, 00:25:33.085 "claimed": true, 00:25:33.085 "claim_type": "exclusive_write", 00:25:33.085 "zoned": false, 00:25:33.085 "supported_io_types": { 00:25:33.085 "read": true, 00:25:33.085 "write": true, 00:25:33.085 "unmap": true, 00:25:33.085 "flush": true, 00:25:33.085 "reset": true, 00:25:33.085 "nvme_admin": false, 00:25:33.085 "nvme_io": false, 00:25:33.085 "nvme_io_md": false, 00:25:33.085 "write_zeroes": true, 00:25:33.085 "zcopy": true, 00:25:33.085 "get_zone_info": false, 00:25:33.085 "zone_management": false, 00:25:33.085 "zone_append": false, 00:25:33.085 "compare": false, 00:25:33.085 "compare_and_write": false, 00:25:33.085 "abort": true, 00:25:33.085 "seek_hole": false, 00:25:33.085 "seek_data": false, 00:25:33.085 "copy": true, 00:25:33.085 "nvme_iov_md": false 00:25:33.085 }, 00:25:33.085 "memory_domains": [ 00:25:33.085 { 00:25:33.085 "dma_device_id": "system", 00:25:33.085 "dma_device_type": 1 00:25:33.085 }, 00:25:33.085 { 00:25:33.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.085 "dma_device_type": 2 00:25:33.085 } 00:25:33.085 ], 00:25:33.085 "driver_specific": {} 00:25:33.085 }' 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.085 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.343 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.343 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.343 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.343 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.343 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:33.601 [2024-07-13 23:11:22.913405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:33.601 [2024-07-13 23:11:22.914740] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:25:33.601 [2024-07-13 23:11:22.915001] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.601 [2024-07-13 23:11:22.915377] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.601 [2024-07-13 23:11:22.915526] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 151451 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 151451 ']' 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 151451 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151451 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151451' 00:25:33.601 killing process with pid 151451 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 151451 00:25:33.601 [2024-07-13 23:11:22.959649] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.601 23:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 151451 00:25:33.601 [2024-07-13 23:11:22.995383] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:33.859 23:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:33.859 00:25:33.859 real 0m33.100s 00:25:33.859 user 1m3.022s 00:25:33.859 sys 0m3.977s 00:25:33.859 23:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:33.859 23:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.859 ************************************ 00:25:33.859 END TEST raid_state_function_test_sb 00:25:33.859 ************************************ 00:25:34.116 23:11:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:34.116 23:11:23 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:25:34.116 23:11:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:34.116 23:11:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.116 23:11:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.116 ************************************ 00:25:34.116 START TEST raid_superblock_test 00:25:34.116 ************************************ 00:25:34.116 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:25:34.116 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:34.116 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # base_bdevs_malloc=() 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=152544 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 152544 /var/tmp/spdk-raid.sock 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 152544 ']' 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:34.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.117 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.117 [2024-07-13 23:11:23.340372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
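
--- note: the raid_superblock_test setup traced above boils down to launching a bare bdev_svc target on a dedicated RPC socket with the bdev_raid debug flag, then driving it via rpc.py. A minimal sketch, assuming the same checkout layout as in the log; framework_wait_init is only a rough stand-in for the waitforlisten polling loop used by the harness ---

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &   # -r: RPC listen socket, -L: enable bdev_raid debug logging
    raid_pid=$!
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_wait_init          # block until the app finishes subsystem init

Running the raid tests on their own socket keeps them from colliding with a default SPDK application listening on /var/tmp/spdk.sock.
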
00:25:34.117 [2024-07-13 23:11:23.340778] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152544 ] 00:25:34.117 [2024-07-13 23:11:23.482770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.374 [2024-07-13 23:11:23.563096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.374 [2024-07-13 23:11:23.621814] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:34.374 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:34.632 malloc1 00:25:34.632 23:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:34.890 [2024-07-13 23:11:24.168074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:34.890 [2024-07-13 23:11:24.168456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.890 [2024-07-13 23:11:24.168618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:25:34.890 [2024-07-13 23:11:24.168780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.890 [2024-07-13 23:11:24.171573] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.890 [2024-07-13 23:11:24.171791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:34.890 pt1 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:34.890 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:35.148 malloc2 00:25:35.148 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:35.405 [2024-07-13 23:11:24.675393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:35.405 [2024-07-13 23:11:24.675509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.405 [2024-07-13 23:11:24.675562] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:35.405 [2024-07-13 23:11:24.675616] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.405 [2024-07-13 23:11:24.678000] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.405 [2024-07-13 23:11:24.678094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:35.405 pt2 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.405 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:35.662 malloc3 00:25:35.662 23:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:35.920 [2024-07-13 23:11:25.136284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:35.920 [2024-07-13 23:11:25.136378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.920 [2024-07-13 23:11:25.136424] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:35.920 [2024-07-13 23:11:25.136470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.920 [2024-07-13 23:11:25.139118] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.920 [2024-07-13 23:11:25.139186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:35.920 pt3 00:25:35.920 
23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.920 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:36.178 malloc4 00:25:36.178 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:36.435 [2024-07-13 23:11:25.643706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:36.435 [2024-07-13 23:11:25.643859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.435 [2024-07-13 23:11:25.643899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:36.435 [2024-07-13 23:11:25.643941] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.435 [2024-07-13 23:11:25.646720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.435 [2024-07-13 23:11:25.646789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:36.435 pt4 00:25:36.435 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:36.435 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:36.435 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:36.694 [2024-07-13 23:11:25.895793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:36.694 [2024-07-13 23:11:25.898071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:36.694 [2024-07-13 23:11:25.898164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:36.694 [2024-07-13 23:11:25.898228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:36.694 [2024-07-13 23:11:25.898536] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:25:36.694 [2024-07-13 23:11:25.898563] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:36.694 [2024-07-13 23:11:25.898742] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:25:36.694 [2024-07-13 23:11:25.899238] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:25:36.694 [2024-07-13 23:11:25.899264] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:25:36.694 [2024-07-13 23:11:25.899454] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.694 23:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.951 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.951 "name": "raid_bdev1", 00:25:36.951 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:36.951 "strip_size_kb": 0, 00:25:36.951 "state": "online", 00:25:36.951 "raid_level": "raid1", 00:25:36.951 "superblock": true, 00:25:36.951 "num_base_bdevs": 4, 00:25:36.951 "num_base_bdevs_discovered": 4, 00:25:36.951 "num_base_bdevs_operational": 4, 00:25:36.951 "base_bdevs_list": [ 00:25:36.951 { 00:25:36.951 "name": "pt1", 00:25:36.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.951 "is_configured": true, 00:25:36.951 "data_offset": 2048, 00:25:36.951 "data_size": 63488 00:25:36.951 }, 00:25:36.951 { 00:25:36.951 "name": "pt2", 00:25:36.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.951 "is_configured": true, 00:25:36.951 "data_offset": 2048, 00:25:36.951 "data_size": 63488 00:25:36.951 }, 00:25:36.951 { 00:25:36.951 "name": "pt3", 00:25:36.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:36.951 "is_configured": true, 00:25:36.951 "data_offset": 2048, 00:25:36.951 "data_size": 63488 00:25:36.951 }, 00:25:36.951 { 00:25:36.951 "name": "pt4", 00:25:36.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:36.951 "is_configured": true, 00:25:36.951 "data_offset": 2048, 00:25:36.951 "data_size": 63488 00:25:36.951 } 00:25:36.951 ] 00:25:36.951 }' 00:25:36.951 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.951 23:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:37.516 
23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:37.516 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:37.773 [2024-07-13 23:11:26.960297] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.773 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:37.773 "name": "raid_bdev1", 00:25:37.773 "aliases": [ 00:25:37.773 "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a" 00:25:37.773 ], 00:25:37.773 "product_name": "Raid Volume", 00:25:37.773 "block_size": 512, 00:25:37.773 "num_blocks": 63488, 00:25:37.773 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:37.773 "assigned_rate_limits": { 00:25:37.773 "rw_ios_per_sec": 0, 00:25:37.773 "rw_mbytes_per_sec": 0, 00:25:37.773 "r_mbytes_per_sec": 0, 00:25:37.773 "w_mbytes_per_sec": 0 00:25:37.773 }, 00:25:37.773 "claimed": false, 00:25:37.773 "zoned": false, 00:25:37.773 "supported_io_types": { 00:25:37.773 "read": true, 00:25:37.773 "write": true, 00:25:37.773 "unmap": false, 00:25:37.773 "flush": false, 00:25:37.773 "reset": true, 00:25:37.773 "nvme_admin": false, 00:25:37.773 "nvme_io": false, 00:25:37.773 "nvme_io_md": false, 00:25:37.773 "write_zeroes": true, 00:25:37.773 "zcopy": false, 00:25:37.773 "get_zone_info": false, 00:25:37.773 "zone_management": false, 00:25:37.773 "zone_append": false, 00:25:37.773 "compare": false, 00:25:37.773 "compare_and_write": false, 00:25:37.773 "abort": false, 00:25:37.773 "seek_hole": false, 00:25:37.773 "seek_data": false, 00:25:37.773 "copy": false, 00:25:37.773 "nvme_iov_md": false 00:25:37.773 }, 00:25:37.773 "memory_domains": [ 00:25:37.773 { 00:25:37.773 "dma_device_id": "system", 00:25:37.773 "dma_device_type": 1 00:25:37.773 }, 00:25:37.773 { 00:25:37.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.773 "dma_device_type": 2 00:25:37.773 }, 00:25:37.773 { 00:25:37.773 "dma_device_id": "system", 00:25:37.773 "dma_device_type": 1 00:25:37.773 }, 00:25:37.773 { 00:25:37.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.773 "dma_device_type": 2 00:25:37.773 }, 00:25:37.773 { 00:25:37.773 "dma_device_id": "system", 00:25:37.773 "dma_device_type": 1 00:25:37.773 }, 00:25:37.773 { 00:25:37.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.773 "dma_device_type": 2 00:25:37.773 }, 00:25:37.774 { 00:25:37.774 "dma_device_id": "system", 00:25:37.774 "dma_device_type": 1 00:25:37.774 }, 00:25:37.774 { 00:25:37.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.774 "dma_device_type": 2 00:25:37.774 } 00:25:37.774 ], 00:25:37.774 "driver_specific": { 00:25:37.774 "raid": { 00:25:37.774 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:37.774 "strip_size_kb": 0, 00:25:37.774 "state": "online", 00:25:37.774 "raid_level": "raid1", 00:25:37.774 "superblock": true, 00:25:37.774 "num_base_bdevs": 4, 00:25:37.774 "num_base_bdevs_discovered": 4, 00:25:37.774 "num_base_bdevs_operational": 4, 00:25:37.774 "base_bdevs_list": [ 00:25:37.774 { 00:25:37.774 "name": "pt1", 00:25:37.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:37.774 "is_configured": true, 00:25:37.774 
"data_offset": 2048, 00:25:37.774 "data_size": 63488 00:25:37.774 }, 00:25:37.774 { 00:25:37.774 "name": "pt2", 00:25:37.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.774 "is_configured": true, 00:25:37.774 "data_offset": 2048, 00:25:37.774 "data_size": 63488 00:25:37.774 }, 00:25:37.774 { 00:25:37.774 "name": "pt3", 00:25:37.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.774 "is_configured": true, 00:25:37.774 "data_offset": 2048, 00:25:37.774 "data_size": 63488 00:25:37.774 }, 00:25:37.774 { 00:25:37.774 "name": "pt4", 00:25:37.774 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:37.774 "is_configured": true, 00:25:37.774 "data_offset": 2048, 00:25:37.774 "data_size": 63488 00:25:37.774 } 00:25:37.774 ] 00:25:37.774 } 00:25:37.774 } 00:25:37.774 }' 00:25:37.774 23:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:37.774 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:37.774 pt2 00:25:37.774 pt3 00:25:37.774 pt4' 00:25:37.774 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:37.774 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:37.774 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:38.032 "name": "pt1", 00:25:38.032 "aliases": [ 00:25:38.032 "00000000-0000-0000-0000-000000000001" 00:25:38.032 ], 00:25:38.032 "product_name": "passthru", 00:25:38.032 "block_size": 512, 00:25:38.032 "num_blocks": 65536, 00:25:38.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:38.032 "assigned_rate_limits": { 00:25:38.032 "rw_ios_per_sec": 0, 00:25:38.032 "rw_mbytes_per_sec": 0, 00:25:38.032 "r_mbytes_per_sec": 0, 00:25:38.032 "w_mbytes_per_sec": 0 00:25:38.032 }, 00:25:38.032 "claimed": true, 00:25:38.032 "claim_type": "exclusive_write", 00:25:38.032 "zoned": false, 00:25:38.032 "supported_io_types": { 00:25:38.032 "read": true, 00:25:38.032 "write": true, 00:25:38.032 "unmap": true, 00:25:38.032 "flush": true, 00:25:38.032 "reset": true, 00:25:38.032 "nvme_admin": false, 00:25:38.032 "nvme_io": false, 00:25:38.032 "nvme_io_md": false, 00:25:38.032 "write_zeroes": true, 00:25:38.032 "zcopy": true, 00:25:38.032 "get_zone_info": false, 00:25:38.032 "zone_management": false, 00:25:38.032 "zone_append": false, 00:25:38.032 "compare": false, 00:25:38.032 "compare_and_write": false, 00:25:38.032 "abort": true, 00:25:38.032 "seek_hole": false, 00:25:38.032 "seek_data": false, 00:25:38.032 "copy": true, 00:25:38.032 "nvme_iov_md": false 00:25:38.032 }, 00:25:38.032 "memory_domains": [ 00:25:38.032 { 00:25:38.032 "dma_device_id": "system", 00:25:38.032 "dma_device_type": 1 00:25:38.032 }, 00:25:38.032 { 00:25:38.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.032 "dma_device_type": 2 00:25:38.032 } 00:25:38.032 ], 00:25:38.032 "driver_specific": { 00:25:38.032 "passthru": { 00:25:38.032 "name": "pt1", 00:25:38.032 "base_bdev_name": "malloc1" 00:25:38.032 } 00:25:38.032 } 00:25:38.032 }' 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:38.032 23:11:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:38.032 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:38.290 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:38.550 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:38.550 "name": "pt2", 00:25:38.550 "aliases": [ 00:25:38.550 "00000000-0000-0000-0000-000000000002" 00:25:38.550 ], 00:25:38.550 "product_name": "passthru", 00:25:38.550 "block_size": 512, 00:25:38.550 "num_blocks": 65536, 00:25:38.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:38.550 "assigned_rate_limits": { 00:25:38.550 "rw_ios_per_sec": 0, 00:25:38.550 "rw_mbytes_per_sec": 0, 00:25:38.550 "r_mbytes_per_sec": 0, 00:25:38.550 "w_mbytes_per_sec": 0 00:25:38.550 }, 00:25:38.550 "claimed": true, 00:25:38.550 "claim_type": "exclusive_write", 00:25:38.550 "zoned": false, 00:25:38.550 "supported_io_types": { 00:25:38.550 "read": true, 00:25:38.550 "write": true, 00:25:38.550 "unmap": true, 00:25:38.550 "flush": true, 00:25:38.550 "reset": true, 00:25:38.550 "nvme_admin": false, 00:25:38.550 "nvme_io": false, 00:25:38.550 "nvme_io_md": false, 00:25:38.550 "write_zeroes": true, 00:25:38.550 "zcopy": true, 00:25:38.550 "get_zone_info": false, 00:25:38.550 "zone_management": false, 00:25:38.550 "zone_append": false, 00:25:38.550 "compare": false, 00:25:38.550 "compare_and_write": false, 00:25:38.550 "abort": true, 00:25:38.550 "seek_hole": false, 00:25:38.550 "seek_data": false, 00:25:38.550 "copy": true, 00:25:38.550 "nvme_iov_md": false 00:25:38.550 }, 00:25:38.550 "memory_domains": [ 00:25:38.550 { 00:25:38.550 "dma_device_id": "system", 00:25:38.550 "dma_device_type": 1 00:25:38.550 }, 00:25:38.550 { 00:25:38.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.550 "dma_device_type": 2 00:25:38.550 } 00:25:38.550 ], 00:25:38.550 "driver_specific": { 00:25:38.550 "passthru": { 00:25:38.550 "name": "pt2", 00:25:38.550 "base_bdev_name": "malloc2" 00:25:38.550 } 00:25:38.550 } 00:25:38.550 }' 00:25:38.550 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:38.550 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:38.550 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:38.550 23:11:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:38.811 23:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:38.811 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:39.070 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:39.070 "name": "pt3", 00:25:39.070 "aliases": [ 00:25:39.070 "00000000-0000-0000-0000-000000000003" 00:25:39.070 ], 00:25:39.070 "product_name": "passthru", 00:25:39.070 "block_size": 512, 00:25:39.070 "num_blocks": 65536, 00:25:39.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:39.070 "assigned_rate_limits": { 00:25:39.070 "rw_ios_per_sec": 0, 00:25:39.070 "rw_mbytes_per_sec": 0, 00:25:39.070 "r_mbytes_per_sec": 0, 00:25:39.070 "w_mbytes_per_sec": 0 00:25:39.070 }, 00:25:39.070 "claimed": true, 00:25:39.070 "claim_type": "exclusive_write", 00:25:39.070 "zoned": false, 00:25:39.070 "supported_io_types": { 00:25:39.070 "read": true, 00:25:39.070 "write": true, 00:25:39.070 "unmap": true, 00:25:39.070 "flush": true, 00:25:39.070 "reset": true, 00:25:39.070 "nvme_admin": false, 00:25:39.070 "nvme_io": false, 00:25:39.070 "nvme_io_md": false, 00:25:39.070 "write_zeroes": true, 00:25:39.070 "zcopy": true, 00:25:39.070 "get_zone_info": false, 00:25:39.070 "zone_management": false, 00:25:39.070 "zone_append": false, 00:25:39.070 "compare": false, 00:25:39.070 "compare_and_write": false, 00:25:39.070 "abort": true, 00:25:39.070 "seek_hole": false, 00:25:39.070 "seek_data": false, 00:25:39.070 "copy": true, 00:25:39.070 "nvme_iov_md": false 00:25:39.070 }, 00:25:39.070 "memory_domains": [ 00:25:39.070 { 00:25:39.070 "dma_device_id": "system", 00:25:39.070 "dma_device_type": 1 00:25:39.070 }, 00:25:39.070 { 00:25:39.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.070 "dma_device_type": 2 00:25:39.070 } 00:25:39.070 ], 00:25:39.070 "driver_specific": { 00:25:39.070 "passthru": { 00:25:39.070 "name": "pt3", 00:25:39.070 "base_bdev_name": "malloc3" 00:25:39.070 } 00:25:39.070 } 00:25:39.070 }' 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.329 
23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:39.329 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:39.588 23:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:39.846 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:39.846 "name": "pt4", 00:25:39.846 "aliases": [ 00:25:39.846 "00000000-0000-0000-0000-000000000004" 00:25:39.846 ], 00:25:39.846 "product_name": "passthru", 00:25:39.846 "block_size": 512, 00:25:39.846 "num_blocks": 65536, 00:25:39.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:39.846 "assigned_rate_limits": { 00:25:39.846 "rw_ios_per_sec": 0, 00:25:39.846 "rw_mbytes_per_sec": 0, 00:25:39.846 "r_mbytes_per_sec": 0, 00:25:39.846 "w_mbytes_per_sec": 0 00:25:39.846 }, 00:25:39.846 "claimed": true, 00:25:39.846 "claim_type": "exclusive_write", 00:25:39.846 "zoned": false, 00:25:39.846 "supported_io_types": { 00:25:39.846 "read": true, 00:25:39.846 "write": true, 00:25:39.846 "unmap": true, 00:25:39.846 "flush": true, 00:25:39.846 "reset": true, 00:25:39.846 "nvme_admin": false, 00:25:39.846 "nvme_io": false, 00:25:39.846 "nvme_io_md": false, 00:25:39.846 "write_zeroes": true, 00:25:39.846 "zcopy": true, 00:25:39.846 "get_zone_info": false, 00:25:39.846 "zone_management": false, 00:25:39.846 "zone_append": false, 00:25:39.846 "compare": false, 00:25:39.846 "compare_and_write": false, 00:25:39.846 "abort": true, 00:25:39.846 "seek_hole": false, 00:25:39.846 "seek_data": false, 00:25:39.846 "copy": true, 00:25:39.847 "nvme_iov_md": false 00:25:39.847 }, 00:25:39.847 "memory_domains": [ 00:25:39.847 { 00:25:39.847 "dma_device_id": "system", 00:25:39.847 "dma_device_type": 1 00:25:39.847 }, 00:25:39.847 { 00:25:39.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.847 "dma_device_type": 2 00:25:39.847 } 00:25:39.847 ], 00:25:39.847 "driver_specific": { 00:25:39.847 "passthru": { 00:25:39.847 "name": "pt4", 00:25:39.847 "base_bdev_name": "malloc4" 00:25:39.847 } 00:25:39.847 } 00:25:39.847 }' 00:25:39.847 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.847 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.847 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:39.847 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:40.105 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:40.363 [2024-07-13 23:11:29.732950] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.363 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a 00:25:40.363 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a ']' 00:25:40.363 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:40.621 [2024-07-13 23:11:29.964704] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:40.621 [2024-07-13 23:11:29.964743] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:40.621 [2024-07-13 23:11:29.964868] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:40.621 [2024-07-13 23:11:29.965013] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:40.621 [2024-07-13 23:11:29.965045] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:25:40.621 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.621 23:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:40.879 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:40.879 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:40.879 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:40.879 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:41.138 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:41.138 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:41.396 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:41.396 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:41.654 23:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:41.654 23:11:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:41.912 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:41.912 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:42.171 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:42.429 [2024-07-13 23:11:31.709940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:42.429 [2024-07-13 23:11:31.712113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:42.429 [2024-07-13 23:11:31.712191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:42.429 [2024-07-13 23:11:31.712232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:42.429 [2024-07-13 23:11:31.712288] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:42.429 [2024-07-13 23:11:31.712408] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:42.429 [2024-07-13 23:11:31.712520] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:42.429 [2024-07-13 23:11:31.712581] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:42.429 [2024-07-13 23:11:31.712611] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:42.429 [2024-07-13 23:11:31.712624] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:25:42.429 request: 00:25:42.430 { 00:25:42.430 "name": "raid_bdev1", 00:25:42.430 "raid_level": "raid1", 00:25:42.430 "base_bdevs": [ 00:25:42.430 "malloc1", 00:25:42.430 "malloc2", 00:25:42.430 "malloc3", 00:25:42.430 "malloc4" 00:25:42.430 ], 00:25:42.430 "superblock": false, 00:25:42.430 "method": "bdev_raid_create", 00:25:42.430 "req_id": 1 00:25:42.430 } 00:25:42.430 Got JSON-RPC error response 00:25:42.430 response: 00:25:42.430 { 00:25:42.430 "code": -17, 00:25:42.430 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:42.430 } 00:25:42.430 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:42.430 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:42.430 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:42.430 23:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:42.430 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.430 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:42.687 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:42.687 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:42.687 23:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:42.946 [2024-07-13 23:11:32.189974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:42.946 [2024-07-13 23:11:32.190130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.946 [2024-07-13 23:11:32.190177] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:42.946 [2024-07-13 23:11:32.190207] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.946 [2024-07-13 23:11:32.192659] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.946 [2024-07-13 23:11:32.192750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:42.946 [2024-07-13 23:11:32.192862] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:42.946 [2024-07-13 23:11:32.192972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:42.946 pt1 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.946 23:11:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.946 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.205 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.205 "name": "raid_bdev1", 00:25:43.205 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:43.205 "strip_size_kb": 0, 00:25:43.205 "state": "configuring", 00:25:43.205 "raid_level": "raid1", 00:25:43.205 "superblock": true, 00:25:43.205 "num_base_bdevs": 4, 00:25:43.205 "num_base_bdevs_discovered": 1, 00:25:43.205 "num_base_bdevs_operational": 4, 00:25:43.205 "base_bdevs_list": [ 00:25:43.205 { 00:25:43.205 "name": "pt1", 00:25:43.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:43.205 "is_configured": true, 00:25:43.205 "data_offset": 2048, 00:25:43.205 "data_size": 63488 00:25:43.205 }, 00:25:43.205 { 00:25:43.205 "name": null, 00:25:43.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:43.205 "is_configured": false, 00:25:43.205 "data_offset": 2048, 00:25:43.205 "data_size": 63488 00:25:43.205 }, 00:25:43.205 { 00:25:43.205 "name": null, 00:25:43.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:43.205 "is_configured": false, 00:25:43.205 "data_offset": 2048, 00:25:43.205 "data_size": 63488 00:25:43.205 }, 00:25:43.205 { 00:25:43.205 "name": null, 00:25:43.205 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:43.205 "is_configured": false, 00:25:43.205 "data_offset": 2048, 00:25:43.205 "data_size": 63488 00:25:43.205 } 00:25:43.205 ] 00:25:43.205 }' 00:25:43.205 23:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.205 23:11:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.771 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:43.771 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:44.041 [2024-07-13 23:11:33.358267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:44.041 [2024-07-13 23:11:33.358415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.041 [2024-07-13 23:11:33.358484] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:44.041 [2024-07-13 23:11:33.358507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.041 [2024-07-13 23:11:33.359039] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.041 [2024-07-13 23:11:33.359122] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:25:44.041 [2024-07-13 23:11:33.359268] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:44.041 [2024-07-13 23:11:33.359298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:44.041 pt2 00:25:44.041 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:44.299 [2024-07-13 23:11:33.578374] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.299 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.555 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.555 "name": "raid_bdev1", 00:25:44.555 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:44.555 "strip_size_kb": 0, 00:25:44.555 "state": "configuring", 00:25:44.555 "raid_level": "raid1", 00:25:44.555 "superblock": true, 00:25:44.555 "num_base_bdevs": 4, 00:25:44.555 "num_base_bdevs_discovered": 1, 00:25:44.555 "num_base_bdevs_operational": 4, 00:25:44.555 "base_bdevs_list": [ 00:25:44.555 { 00:25:44.555 "name": "pt1", 00:25:44.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:44.555 "is_configured": true, 00:25:44.555 "data_offset": 2048, 00:25:44.555 "data_size": 63488 00:25:44.555 }, 00:25:44.555 { 00:25:44.555 "name": null, 00:25:44.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:44.555 "is_configured": false, 00:25:44.555 "data_offset": 2048, 00:25:44.555 "data_size": 63488 00:25:44.555 }, 00:25:44.555 { 00:25:44.555 "name": null, 00:25:44.555 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:44.555 "is_configured": false, 00:25:44.555 "data_offset": 2048, 00:25:44.555 "data_size": 63488 00:25:44.555 }, 00:25:44.555 { 00:25:44.555 "name": null, 00:25:44.555 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:44.555 "is_configured": false, 00:25:44.555 "data_offset": 2048, 00:25:44.555 "data_size": 63488 00:25:44.555 } 00:25:44.555 ] 00:25:44.555 }' 00:25:44.555 23:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.555 23:11:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.119 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:45.119 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:45.119 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:45.376 [2024-07-13 23:11:34.686648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:45.376 [2024-07-13 23:11:34.686787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.376 [2024-07-13 23:11:34.686843] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:45.376 [2024-07-13 23:11:34.686867] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.376 [2024-07-13 23:11:34.687426] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.376 [2024-07-13 23:11:34.687513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:45.376 [2024-07-13 23:11:34.687627] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:45.376 [2024-07-13 23:11:34.687656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:45.376 pt2 00:25:45.376 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:45.376 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:45.376 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:45.634 [2024-07-13 23:11:34.960673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:45.634 [2024-07-13 23:11:34.960795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.634 [2024-07-13 23:11:34.960833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:45.634 [2024-07-13 23:11:34.960864] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.634 [2024-07-13 23:11:34.961377] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.634 [2024-07-13 23:11:34.961598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:45.634 [2024-07-13 23:11:34.961820] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:45.634 [2024-07-13 23:11:34.961957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:45.634 pt3 00:25:45.634 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:45.634 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:45.634 23:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:45.891 [2024-07-13 23:11:35.176672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:45.891 [2024-07-13 23:11:35.176972] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.891 [2024-07-13 23:11:35.177134] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:45.891 [2024-07-13 23:11:35.177272] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.891 [2024-07-13 23:11:35.177809] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.891 [2024-07-13 23:11:35.178049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:45.891 [2024-07-13 23:11:35.178262] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:45.891 [2024-07-13 23:11:35.178446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:45.891 [2024-07-13 23:11:35.178755] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:45.891 [2024-07-13 23:11:35.178871] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:45.892 [2024-07-13 23:11:35.178991] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:25:45.892 [2024-07-13 23:11:35.179453] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:45.892 [2024-07-13 23:11:35.179583] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:45.892 [2024-07-13 23:11:35.179778] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.892 pt4 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.892 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.149 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.149 "name": "raid_bdev1", 00:25:46.149 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:46.149 "strip_size_kb": 0, 00:25:46.149 "state": "online", 00:25:46.149 "raid_level": "raid1", 00:25:46.149 "superblock": true, 00:25:46.149 
"num_base_bdevs": 4, 00:25:46.149 "num_base_bdevs_discovered": 4, 00:25:46.149 "num_base_bdevs_operational": 4, 00:25:46.149 "base_bdevs_list": [ 00:25:46.149 { 00:25:46.149 "name": "pt1", 00:25:46.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:46.149 "is_configured": true, 00:25:46.149 "data_offset": 2048, 00:25:46.149 "data_size": 63488 00:25:46.149 }, 00:25:46.149 { 00:25:46.149 "name": "pt2", 00:25:46.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:46.149 "is_configured": true, 00:25:46.149 "data_offset": 2048, 00:25:46.149 "data_size": 63488 00:25:46.149 }, 00:25:46.149 { 00:25:46.149 "name": "pt3", 00:25:46.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:46.149 "is_configured": true, 00:25:46.149 "data_offset": 2048, 00:25:46.149 "data_size": 63488 00:25:46.149 }, 00:25:46.149 { 00:25:46.149 "name": "pt4", 00:25:46.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:46.149 "is_configured": true, 00:25:46.149 "data_offset": 2048, 00:25:46.149 "data_size": 63488 00:25:46.149 } 00:25:46.149 ] 00:25:46.149 }' 00:25:46.149 23:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.149 23:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:46.715 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:46.973 [2024-07-13 23:11:36.337290] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:46.973 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:46.973 "name": "raid_bdev1", 00:25:46.973 "aliases": [ 00:25:46.973 "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a" 00:25:46.973 ], 00:25:46.973 "product_name": "Raid Volume", 00:25:46.973 "block_size": 512, 00:25:46.973 "num_blocks": 63488, 00:25:46.973 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:46.973 "assigned_rate_limits": { 00:25:46.973 "rw_ios_per_sec": 0, 00:25:46.973 "rw_mbytes_per_sec": 0, 00:25:46.973 "r_mbytes_per_sec": 0, 00:25:46.973 "w_mbytes_per_sec": 0 00:25:46.973 }, 00:25:46.973 "claimed": false, 00:25:46.973 "zoned": false, 00:25:46.973 "supported_io_types": { 00:25:46.973 "read": true, 00:25:46.973 "write": true, 00:25:46.973 "unmap": false, 00:25:46.973 "flush": false, 00:25:46.973 "reset": true, 00:25:46.973 "nvme_admin": false, 00:25:46.973 "nvme_io": false, 00:25:46.973 "nvme_io_md": false, 00:25:46.973 "write_zeroes": true, 00:25:46.973 "zcopy": false, 00:25:46.973 "get_zone_info": false, 00:25:46.973 "zone_management": false, 00:25:46.973 "zone_append": false, 00:25:46.973 "compare": false, 00:25:46.973 "compare_and_write": false, 00:25:46.973 "abort": false, 00:25:46.973 "seek_hole": false, 
00:25:46.973 "seek_data": false, 00:25:46.973 "copy": false, 00:25:46.973 "nvme_iov_md": false 00:25:46.973 }, 00:25:46.973 "memory_domains": [ 00:25:46.973 { 00:25:46.973 "dma_device_id": "system", 00:25:46.973 "dma_device_type": 1 00:25:46.973 }, 00:25:46.973 { 00:25:46.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.973 "dma_device_type": 2 00:25:46.973 }, 00:25:46.973 { 00:25:46.973 "dma_device_id": "system", 00:25:46.973 "dma_device_type": 1 00:25:46.973 }, 00:25:46.973 { 00:25:46.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.974 "dma_device_type": 2 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "dma_device_id": "system", 00:25:46.974 "dma_device_type": 1 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.974 "dma_device_type": 2 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "dma_device_id": "system", 00:25:46.974 "dma_device_type": 1 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.974 "dma_device_type": 2 00:25:46.974 } 00:25:46.974 ], 00:25:46.974 "driver_specific": { 00:25:46.974 "raid": { 00:25:46.974 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:46.974 "strip_size_kb": 0, 00:25:46.974 "state": "online", 00:25:46.974 "raid_level": "raid1", 00:25:46.974 "superblock": true, 00:25:46.974 "num_base_bdevs": 4, 00:25:46.974 "num_base_bdevs_discovered": 4, 00:25:46.974 "num_base_bdevs_operational": 4, 00:25:46.974 "base_bdevs_list": [ 00:25:46.974 { 00:25:46.974 "name": "pt1", 00:25:46.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:46.974 "is_configured": true, 00:25:46.974 "data_offset": 2048, 00:25:46.974 "data_size": 63488 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "name": "pt2", 00:25:46.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:46.974 "is_configured": true, 00:25:46.974 "data_offset": 2048, 00:25:46.974 "data_size": 63488 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "name": "pt3", 00:25:46.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:46.974 "is_configured": true, 00:25:46.974 "data_offset": 2048, 00:25:46.974 "data_size": 63488 00:25:46.974 }, 00:25:46.974 { 00:25:46.974 "name": "pt4", 00:25:46.974 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:46.974 "is_configured": true, 00:25:46.974 "data_offset": 2048, 00:25:46.974 "data_size": 63488 00:25:46.974 } 00:25:46.974 ] 00:25:46.974 } 00:25:46.974 } 00:25:46.974 }' 00:25:46.974 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:47.232 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:47.232 pt2 00:25:47.232 pt3 00:25:47.232 pt4' 00:25:47.232 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:47.232 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:47.232 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:47.490 "name": "pt1", 00:25:47.490 "aliases": [ 00:25:47.490 "00000000-0000-0000-0000-000000000001" 00:25:47.490 ], 00:25:47.490 "product_name": "passthru", 00:25:47.490 "block_size": 512, 00:25:47.490 "num_blocks": 65536, 00:25:47.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:47.490 "assigned_rate_limits": { 
00:25:47.490 "rw_ios_per_sec": 0, 00:25:47.490 "rw_mbytes_per_sec": 0, 00:25:47.490 "r_mbytes_per_sec": 0, 00:25:47.490 "w_mbytes_per_sec": 0 00:25:47.490 }, 00:25:47.490 "claimed": true, 00:25:47.490 "claim_type": "exclusive_write", 00:25:47.490 "zoned": false, 00:25:47.490 "supported_io_types": { 00:25:47.490 "read": true, 00:25:47.490 "write": true, 00:25:47.490 "unmap": true, 00:25:47.490 "flush": true, 00:25:47.490 "reset": true, 00:25:47.490 "nvme_admin": false, 00:25:47.490 "nvme_io": false, 00:25:47.490 "nvme_io_md": false, 00:25:47.490 "write_zeroes": true, 00:25:47.490 "zcopy": true, 00:25:47.490 "get_zone_info": false, 00:25:47.490 "zone_management": false, 00:25:47.490 "zone_append": false, 00:25:47.490 "compare": false, 00:25:47.490 "compare_and_write": false, 00:25:47.490 "abort": true, 00:25:47.490 "seek_hole": false, 00:25:47.490 "seek_data": false, 00:25:47.490 "copy": true, 00:25:47.490 "nvme_iov_md": false 00:25:47.490 }, 00:25:47.490 "memory_domains": [ 00:25:47.490 { 00:25:47.490 "dma_device_id": "system", 00:25:47.490 "dma_device_type": 1 00:25:47.490 }, 00:25:47.490 { 00:25:47.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.490 "dma_device_type": 2 00:25:47.490 } 00:25:47.490 ], 00:25:47.490 "driver_specific": { 00:25:47.490 "passthru": { 00:25:47.490 "name": "pt1", 00:25:47.490 "base_bdev_name": "malloc1" 00:25:47.490 } 00:25:47.490 } 00:25:47.490 }' 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:47.490 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:47.748 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:47.748 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:47.748 23:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:47.748 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:47.748 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:47.748 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:47.748 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:47.748 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:48.028 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:48.028 "name": "pt2", 00:25:48.028 "aliases": [ 00:25:48.028 "00000000-0000-0000-0000-000000000002" 00:25:48.028 ], 00:25:48.028 "product_name": "passthru", 00:25:48.028 "block_size": 512, 00:25:48.028 "num_blocks": 65536, 00:25:48.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:48.028 "assigned_rate_limits": { 00:25:48.028 "rw_ios_per_sec": 0, 00:25:48.028 "rw_mbytes_per_sec": 0, 00:25:48.028 "r_mbytes_per_sec": 0, 00:25:48.028 "w_mbytes_per_sec": 0 00:25:48.028 
}, 00:25:48.028 "claimed": true, 00:25:48.028 "claim_type": "exclusive_write", 00:25:48.028 "zoned": false, 00:25:48.028 "supported_io_types": { 00:25:48.028 "read": true, 00:25:48.028 "write": true, 00:25:48.028 "unmap": true, 00:25:48.028 "flush": true, 00:25:48.028 "reset": true, 00:25:48.028 "nvme_admin": false, 00:25:48.028 "nvme_io": false, 00:25:48.028 "nvme_io_md": false, 00:25:48.028 "write_zeroes": true, 00:25:48.028 "zcopy": true, 00:25:48.028 "get_zone_info": false, 00:25:48.028 "zone_management": false, 00:25:48.028 "zone_append": false, 00:25:48.028 "compare": false, 00:25:48.028 "compare_and_write": false, 00:25:48.028 "abort": true, 00:25:48.028 "seek_hole": false, 00:25:48.028 "seek_data": false, 00:25:48.028 "copy": true, 00:25:48.028 "nvme_iov_md": false 00:25:48.028 }, 00:25:48.028 "memory_domains": [ 00:25:48.028 { 00:25:48.028 "dma_device_id": "system", 00:25:48.028 "dma_device_type": 1 00:25:48.028 }, 00:25:48.028 { 00:25:48.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.028 "dma_device_type": 2 00:25:48.028 } 00:25:48.028 ], 00:25:48.028 "driver_specific": { 00:25:48.028 "passthru": { 00:25:48.028 "name": "pt2", 00:25:48.028 "base_bdev_name": "malloc2" 00:25:48.028 } 00:25:48.028 } 00:25:48.028 }' 00:25:48.028 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.028 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.028 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:48.028 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:48.285 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:48.851 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:48.851 "name": "pt3", 00:25:48.851 "aliases": [ 00:25:48.851 "00000000-0000-0000-0000-000000000003" 00:25:48.851 ], 00:25:48.851 "product_name": "passthru", 00:25:48.851 "block_size": 512, 00:25:48.851 "num_blocks": 65536, 00:25:48.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:48.851 "assigned_rate_limits": { 00:25:48.851 "rw_ios_per_sec": 0, 00:25:48.851 "rw_mbytes_per_sec": 0, 00:25:48.851 "r_mbytes_per_sec": 0, 00:25:48.851 "w_mbytes_per_sec": 0 00:25:48.851 }, 00:25:48.851 "claimed": true, 00:25:48.851 "claim_type": "exclusive_write", 00:25:48.851 "zoned": false, 00:25:48.851 "supported_io_types": { 
00:25:48.851 "read": true, 00:25:48.851 "write": true, 00:25:48.851 "unmap": true, 00:25:48.851 "flush": true, 00:25:48.851 "reset": true, 00:25:48.851 "nvme_admin": false, 00:25:48.851 "nvme_io": false, 00:25:48.851 "nvme_io_md": false, 00:25:48.851 "write_zeroes": true, 00:25:48.851 "zcopy": true, 00:25:48.851 "get_zone_info": false, 00:25:48.851 "zone_management": false, 00:25:48.851 "zone_append": false, 00:25:48.851 "compare": false, 00:25:48.851 "compare_and_write": false, 00:25:48.851 "abort": true, 00:25:48.851 "seek_hole": false, 00:25:48.851 "seek_data": false, 00:25:48.851 "copy": true, 00:25:48.851 "nvme_iov_md": false 00:25:48.851 }, 00:25:48.851 "memory_domains": [ 00:25:48.851 { 00:25:48.851 "dma_device_id": "system", 00:25:48.851 "dma_device_type": 1 00:25:48.851 }, 00:25:48.851 { 00:25:48.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.851 "dma_device_type": 2 00:25:48.851 } 00:25:48.851 ], 00:25:48.851 "driver_specific": { 00:25:48.851 "passthru": { 00:25:48.851 "name": "pt3", 00:25:48.851 "base_bdev_name": "malloc3" 00:25:48.851 } 00:25:48.851 } 00:25:48.851 }' 00:25:48.851 23:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:48.851 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:49.109 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.367 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.367 "name": "pt4", 00:25:49.367 "aliases": [ 00:25:49.367 "00000000-0000-0000-0000-000000000004" 00:25:49.367 ], 00:25:49.367 "product_name": "passthru", 00:25:49.367 "block_size": 512, 00:25:49.367 "num_blocks": 65536, 00:25:49.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:49.367 "assigned_rate_limits": { 00:25:49.367 "rw_ios_per_sec": 0, 00:25:49.367 "rw_mbytes_per_sec": 0, 00:25:49.367 "r_mbytes_per_sec": 0, 00:25:49.367 "w_mbytes_per_sec": 0 00:25:49.367 }, 00:25:49.367 "claimed": true, 00:25:49.367 "claim_type": "exclusive_write", 00:25:49.367 "zoned": false, 00:25:49.367 "supported_io_types": { 00:25:49.367 "read": true, 00:25:49.367 "write": true, 00:25:49.367 "unmap": true, 00:25:49.367 "flush": true, 00:25:49.367 "reset": true, 00:25:49.367 
"nvme_admin": false, 00:25:49.367 "nvme_io": false, 00:25:49.367 "nvme_io_md": false, 00:25:49.367 "write_zeroes": true, 00:25:49.367 "zcopy": true, 00:25:49.367 "get_zone_info": false, 00:25:49.367 "zone_management": false, 00:25:49.367 "zone_append": false, 00:25:49.367 "compare": false, 00:25:49.367 "compare_and_write": false, 00:25:49.367 "abort": true, 00:25:49.367 "seek_hole": false, 00:25:49.367 "seek_data": false, 00:25:49.367 "copy": true, 00:25:49.367 "nvme_iov_md": false 00:25:49.367 }, 00:25:49.367 "memory_domains": [ 00:25:49.367 { 00:25:49.367 "dma_device_id": "system", 00:25:49.367 "dma_device_type": 1 00:25:49.367 }, 00:25:49.367 { 00:25:49.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.367 "dma_device_type": 2 00:25:49.367 } 00:25:49.367 ], 00:25:49.367 "driver_specific": { 00:25:49.367 "passthru": { 00:25:49.367 "name": "pt4", 00:25:49.367 "base_bdev_name": "malloc4" 00:25:49.367 } 00:25:49.367 } 00:25:49.367 }' 00:25:49.367 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.367 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.367 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.367 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.367 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:49.626 23:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:49.884 [2024-07-13 23:11:39.250244] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.884 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a '!=' 41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a ']' 00:25:49.884 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:25:49.884 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:49.884 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:49.884 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:50.142 [2024-07-13 23:11:39.514136] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:50.142 
23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.142 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.707 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.707 "name": "raid_bdev1", 00:25:50.707 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:50.707 "strip_size_kb": 0, 00:25:50.707 "state": "online", 00:25:50.707 "raid_level": "raid1", 00:25:50.707 "superblock": true, 00:25:50.707 "num_base_bdevs": 4, 00:25:50.707 "num_base_bdevs_discovered": 3, 00:25:50.707 "num_base_bdevs_operational": 3, 00:25:50.707 "base_bdevs_list": [ 00:25:50.707 { 00:25:50.707 "name": null, 00:25:50.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.707 "is_configured": false, 00:25:50.707 "data_offset": 2048, 00:25:50.707 "data_size": 63488 00:25:50.707 }, 00:25:50.707 { 00:25:50.707 "name": "pt2", 00:25:50.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:50.707 "is_configured": true, 00:25:50.707 "data_offset": 2048, 00:25:50.707 "data_size": 63488 00:25:50.707 }, 00:25:50.707 { 00:25:50.707 "name": "pt3", 00:25:50.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:50.707 "is_configured": true, 00:25:50.707 "data_offset": 2048, 00:25:50.707 "data_size": 63488 00:25:50.707 }, 00:25:50.707 { 00:25:50.707 "name": "pt4", 00:25:50.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:50.707 "is_configured": true, 00:25:50.707 "data_offset": 2048, 00:25:50.707 "data_size": 63488 00:25:50.707 } 00:25:50.707 ] 00:25:50.707 }' 00:25:50.707 23:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.707 23:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.271 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:51.272 [2024-07-13 23:11:40.634337] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:51.272 [2024-07-13 23:11:40.634571] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:51.272 [2024-07-13 23:11:40.634783] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.272 [2024-07-13 23:11:40.634977] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.272 [2024-07-13 23:11:40.635091] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:51.272 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.272 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:25:51.529 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:25:51.529 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:25:51.529 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:25:51.529 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:51.529 23:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:51.787 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:51.787 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:51.787 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:52.044 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:52.044 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:52.044 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:52.608 [2024-07-13 23:11:41.902625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:52.608 [2024-07-13 23:11:41.902884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.608 [2024-07-13 23:11:41.903037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:52.608 [2024-07-13 23:11:41.903161] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.608 [2024-07-13 23:11:41.905776] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.608 [2024-07-13 23:11:41.905962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:52.608 [2024-07-13 23:11:41.906255] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:52.608 [2024-07-13 23:11:41.906462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:52.608 pt2 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.608 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.609 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.609 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.609 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.609 23:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.866 23:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.866 "name": "raid_bdev1", 00:25:52.866 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:52.866 "strip_size_kb": 0, 00:25:52.866 "state": "configuring", 00:25:52.866 "raid_level": "raid1", 00:25:52.866 "superblock": true, 00:25:52.866 "num_base_bdevs": 4, 00:25:52.866 "num_base_bdevs_discovered": 1, 00:25:52.866 "num_base_bdevs_operational": 3, 00:25:52.866 "base_bdevs_list": [ 00:25:52.866 { 00:25:52.866 "name": null, 00:25:52.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.866 "is_configured": false, 00:25:52.866 "data_offset": 2048, 00:25:52.866 "data_size": 63488 00:25:52.866 }, 00:25:52.866 { 00:25:52.866 "name": "pt2", 00:25:52.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:52.866 "is_configured": true, 00:25:52.866 "data_offset": 2048, 00:25:52.866 "data_size": 63488 00:25:52.866 }, 00:25:52.866 { 00:25:52.866 "name": null, 00:25:52.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:52.866 "is_configured": false, 00:25:52.866 "data_offset": 2048, 00:25:52.866 "data_size": 63488 00:25:52.866 }, 00:25:52.866 { 00:25:52.866 "name": null, 00:25:52.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:52.866 "is_configured": false, 00:25:52.866 "data_offset": 2048, 00:25:52.866 "data_size": 63488 00:25:52.866 } 00:25:52.866 ] 00:25:52.866 }' 00:25:52.866 23:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.866 23:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.432 23:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:25:53.432 23:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:53.432 23:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:53.690 [2024-07-13 23:11:43.019031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:53.690 [2024-07-13 23:11:43.019334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.690 [2024-07-13 23:11:43.019541] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000ae80 00:25:53.690 [2024-07-13 23:11:43.019700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.690 [2024-07-13 23:11:43.020346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.690 [2024-07-13 23:11:43.020563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:53.690 [2024-07-13 23:11:43.020787] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:53.690 [2024-07-13 23:11:43.020942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:53.690 pt3 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.690 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.946 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.946 "name": "raid_bdev1", 00:25:53.946 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:53.946 "strip_size_kb": 0, 00:25:53.946 "state": "configuring", 00:25:53.946 "raid_level": "raid1", 00:25:53.946 "superblock": true, 00:25:53.946 "num_base_bdevs": 4, 00:25:53.946 "num_base_bdevs_discovered": 2, 00:25:53.946 "num_base_bdevs_operational": 3, 00:25:53.946 "base_bdevs_list": [ 00:25:53.946 { 00:25:53.946 "name": null, 00:25:53.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.946 "is_configured": false, 00:25:53.946 "data_offset": 2048, 00:25:53.946 "data_size": 63488 00:25:53.946 }, 00:25:53.946 { 00:25:53.946 "name": "pt2", 00:25:53.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:53.946 "is_configured": true, 00:25:53.946 "data_offset": 2048, 00:25:53.946 "data_size": 63488 00:25:53.946 }, 00:25:53.946 { 00:25:53.946 "name": "pt3", 00:25:53.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:53.946 "is_configured": true, 00:25:53.946 "data_offset": 2048, 00:25:53.946 "data_size": 63488 00:25:53.946 }, 00:25:53.946 { 00:25:53.946 "name": null, 00:25:53.946 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:53.946 "is_configured": false, 00:25:53.946 "data_offset": 2048, 00:25:53.946 "data_size": 63488 00:25:53.946 } 00:25:53.946 ] 00:25:53.946 }' 00:25:53.946 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:25:53.946 23:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.875 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:25:54.875 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:54.875 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:25:54.875 23:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:54.875 [2024-07-13 23:11:44.171267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:54.875 [2024-07-13 23:11:44.171577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.875 [2024-07-13 23:11:44.171742] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:54.875 [2024-07-13 23:11:44.171867] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.875 [2024-07-13 23:11:44.172467] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.875 [2024-07-13 23:11:44.172661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:54.875 [2024-07-13 23:11:44.172864] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:54.875 [2024-07-13 23:11:44.173042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:54.875 [2024-07-13 23:11:44.173232] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:25:54.875 [2024-07-13 23:11:44.173389] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:54.875 [2024-07-13 23:11:44.173589] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:25:54.875 [2024-07-13 23:11:44.174078] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:25:54.875 [2024-07-13 23:11:44.174223] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:25:54.875 [2024-07-13 23:11:44.174483] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.875 pt4 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.875 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.876 23:11:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.876 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.133 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.133 "name": "raid_bdev1", 00:25:55.133 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:55.133 "strip_size_kb": 0, 00:25:55.133 "state": "online", 00:25:55.133 "raid_level": "raid1", 00:25:55.133 "superblock": true, 00:25:55.133 "num_base_bdevs": 4, 00:25:55.133 "num_base_bdevs_discovered": 3, 00:25:55.133 "num_base_bdevs_operational": 3, 00:25:55.133 "base_bdevs_list": [ 00:25:55.133 { 00:25:55.133 "name": null, 00:25:55.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.133 "is_configured": false, 00:25:55.133 "data_offset": 2048, 00:25:55.133 "data_size": 63488 00:25:55.133 }, 00:25:55.133 { 00:25:55.133 "name": "pt2", 00:25:55.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:55.133 "is_configured": true, 00:25:55.133 "data_offset": 2048, 00:25:55.133 "data_size": 63488 00:25:55.133 }, 00:25:55.133 { 00:25:55.133 "name": "pt3", 00:25:55.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:55.133 "is_configured": true, 00:25:55.133 "data_offset": 2048, 00:25:55.133 "data_size": 63488 00:25:55.133 }, 00:25:55.133 { 00:25:55.133 "name": "pt4", 00:25:55.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:55.133 "is_configured": true, 00:25:55.133 "data_offset": 2048, 00:25:55.133 "data_size": 63488 00:25:55.133 } 00:25:55.133 ] 00:25:55.133 }' 00:25:55.133 23:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.133 23:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.063 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:56.063 [2024-07-13 23:11:45.387510] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:56.063 [2024-07-13 23:11:45.387739] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:56.063 [2024-07-13 23:11:45.387916] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:56.063 [2024-07-13 23:11:45.388127] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:56.063 [2024-07-13 23:11:45.388238] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:25:56.063 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.063 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:25:56.321 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:25:56.321 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:25:56.321 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:25:56.321 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:25:56.321 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:25:56.579 23:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:56.837 [2024-07-13 23:11:46.123613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:56.837 [2024-07-13 23:11:46.123904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.837 [2024-07-13 23:11:46.124057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:56.837 [2024-07-13 23:11:46.124181] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.837 [2024-07-13 23:11:46.126750] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.837 [2024-07-13 23:11:46.126957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:56.837 [2024-07-13 23:11:46.127182] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:56.837 [2024-07-13 23:11:46.127347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:56.837 [2024-07-13 23:11:46.127640] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:56.837 [2024-07-13 23:11:46.127777] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:56.837 [2024-07-13 23:11:46.127845] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:25:56.837 [2024-07-13 23:11:46.128054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:56.837 [2024-07-13 23:11:46.128364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:56.837 pt1 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.837 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.096 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:25:57.096 "name": "raid_bdev1", 00:25:57.096 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:57.096 "strip_size_kb": 0, 00:25:57.096 "state": "configuring", 00:25:57.096 "raid_level": "raid1", 00:25:57.096 "superblock": true, 00:25:57.096 "num_base_bdevs": 4, 00:25:57.096 "num_base_bdevs_discovered": 2, 00:25:57.096 "num_base_bdevs_operational": 3, 00:25:57.096 "base_bdevs_list": [ 00:25:57.096 { 00:25:57.096 "name": null, 00:25:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.096 "is_configured": false, 00:25:57.096 "data_offset": 2048, 00:25:57.096 "data_size": 63488 00:25:57.096 }, 00:25:57.096 { 00:25:57.096 "name": "pt2", 00:25:57.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:57.096 "is_configured": true, 00:25:57.096 "data_offset": 2048, 00:25:57.096 "data_size": 63488 00:25:57.096 }, 00:25:57.096 { 00:25:57.096 "name": "pt3", 00:25:57.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:57.096 "is_configured": true, 00:25:57.096 "data_offset": 2048, 00:25:57.096 "data_size": 63488 00:25:57.096 }, 00:25:57.096 { 00:25:57.096 "name": null, 00:25:57.096 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:57.096 "is_configured": false, 00:25:57.096 "data_offset": 2048, 00:25:57.096 "data_size": 63488 00:25:57.096 } 00:25:57.096 ] 00:25:57.096 }' 00:25:57.096 23:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.096 23:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.762 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:57.762 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:25:58.020 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:25:58.020 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:58.278 [2024-07-13 23:11:47.488061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:58.278 [2024-07-13 23:11:47.488366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.278 [2024-07-13 23:11:47.488526] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:58.278 [2024-07-13 23:11:47.488664] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.278 [2024-07-13 23:11:47.489255] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.278 [2024-07-13 23:11:47.489437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:58.278 [2024-07-13 23:11:47.489697] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:58.278 [2024-07-13 23:11:47.489839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:58.278 [2024-07-13 23:11:47.490117] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:25:58.278 [2024-07-13 23:11:47.490258] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:58.278 [2024-07-13 23:11:47.490456] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:25:58.278 [2024-07-13 23:11:47.490905] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:25:58.278 [2024-07-13 23:11:47.491065] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:25:58.278 [2024-07-13 23:11:47.491285] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.278 pt4 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.278 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.536 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.536 "name": "raid_bdev1", 00:25:58.536 "uuid": "41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a", 00:25:58.536 "strip_size_kb": 0, 00:25:58.536 "state": "online", 00:25:58.536 "raid_level": "raid1", 00:25:58.536 "superblock": true, 00:25:58.536 "num_base_bdevs": 4, 00:25:58.536 "num_base_bdevs_discovered": 3, 00:25:58.536 "num_base_bdevs_operational": 3, 00:25:58.536 "base_bdevs_list": [ 00:25:58.536 { 00:25:58.536 "name": null, 00:25:58.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.536 "is_configured": false, 00:25:58.536 "data_offset": 2048, 00:25:58.536 "data_size": 63488 00:25:58.536 }, 00:25:58.536 { 00:25:58.536 "name": "pt2", 00:25:58.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:58.536 "is_configured": true, 00:25:58.536 "data_offset": 2048, 00:25:58.536 "data_size": 63488 00:25:58.536 }, 00:25:58.536 { 00:25:58.536 "name": "pt3", 00:25:58.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:58.536 "is_configured": true, 00:25:58.536 "data_offset": 2048, 00:25:58.536 "data_size": 63488 00:25:58.536 }, 00:25:58.536 { 00:25:58.536 "name": "pt4", 00:25:58.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:58.536 "is_configured": true, 00:25:58.536 "data_offset": 2048, 00:25:58.536 "data_size": 63488 00:25:58.536 } 00:25:58.536 ] 00:25:58.536 }' 00:25:58.536 23:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.536 23:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.102 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:25:59.102 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:59.359 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:25:59.359 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:25:59.359 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:59.618 [2024-07-13 23:11:48.876564] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a '!=' 41af6e22-cd2c-47c1-bc5d-027bc0e8ba3a ']' 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 152544 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 152544 ']' 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 152544 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152544 00:25:59.618 killing process with pid 152544 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 152544' 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 152544 00:25:59.618 [2024-07-13 23:11:48.914352] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.618 23:11:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 152544 00:25:59.618 [2024-07-13 23:11:48.914442] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.618 [2024-07-13 23:11:48.914518] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.618 [2024-07-13 23:11:48.914529] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:25:59.618 [2024-07-13 23:11:48.953841] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:59.876 23:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:25:59.876 00:25:59.876 real 0m25.889s 00:25:59.876 user 0m49.413s 00:25:59.876 sys 0m3.175s 00:25:59.876 23:11:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:59.876 23:11:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.876 ************************************ 00:25:59.876 END TEST raid_superblock_test 00:25:59.876 ************************************ 00:25:59.876 23:11:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:59.876 23:11:49 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:25:59.876 23:11:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 
']' 00:25:59.876 23:11:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.876 23:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.876 ************************************ 00:25:59.876 START TEST raid_read_error_test 00:25:59.876 ************************************ 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.KoLmTSiimw 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=153389 00:25:59.876 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 153389 /var/tmp/spdk-raid.sock 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 153389 ']' 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:59.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.877 23:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.136 [2024-07-13 23:11:49.307271] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:00.136 [2024-07-13 23:11:49.307725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153389 ] 00:26:00.136 [2024-07-13 23:11:49.447729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.136 [2024-07-13 23:11:49.523584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.394 [2024-07-13 23:11:49.580944] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:00.962 23:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.962 23:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:00.962 23:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:00.962 23:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:01.221 BaseBdev1_malloc 00:26:01.221 23:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:01.480 true 00:26:01.480 23:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:01.739 [2024-07-13 23:11:50.950729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:01.739 [2024-07-13 23:11:50.951057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.739 [2024-07-13 23:11:50.951273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:26:01.739 [2024-07-13 23:11:50.951464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.739 [2024-07-13 23:11:50.954480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.739 [2024-07-13 23:11:50.954674] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:26:01.739 BaseBdev1 00:26:01.739 23:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:01.739 23:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:01.998 BaseBdev2_malloc 00:26:01.998 23:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:02.258 true 00:26:02.258 23:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:02.517 [2024-07-13 23:11:51.678352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:02.517 [2024-07-13 23:11:51.678666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.517 [2024-07-13 23:11:51.678834] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:02.517 [2024-07-13 23:11:51.678993] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.517 [2024-07-13 23:11:51.681623] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.517 [2024-07-13 23:11:51.681815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:02.517 BaseBdev2 00:26:02.517 23:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:02.517 23:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:02.775 BaseBdev3_malloc 00:26:02.775 23:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:02.775 true 00:26:03.033 23:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:03.033 [2024-07-13 23:11:52.402605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:03.033 [2024-07-13 23:11:52.402889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.033 [2024-07-13 23:11:52.403057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:03.033 [2024-07-13 23:11:52.403211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.033 [2024-07-13 23:11:52.406076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.033 [2024-07-13 23:11:52.406284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:03.033 BaseBdev3 00:26:03.033 23:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:03.033 23:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:03.292 BaseBdev4_malloc 00:26:03.550 23:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:03.550 true 00:26:03.550 23:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:03.808 [2024-07-13 23:11:53.133303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:03.808 [2024-07-13 23:11:53.133626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.808 [2024-07-13 23:11:53.133831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:03.808 [2024-07-13 23:11:53.134028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.808 [2024-07-13 23:11:53.136526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.808 [2024-07-13 23:11:53.136710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:03.808 BaseBdev4 00:26:03.808 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:04.066 [2024-07-13 23:11:53.349655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:04.066 [2024-07-13 23:11:53.352067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:04.066 [2024-07-13 23:11:53.352310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:04.066 [2024-07-13 23:11:53.352499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:04.066 [2024-07-13 23:11:53.352959] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:26:04.066 [2024-07-13 23:11:53.353097] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:04.066 [2024-07-13 23:11:53.353296] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:04.066 [2024-07-13 23:11:53.353830] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:26:04.066 [2024-07-13 23:11:53.354020] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:26:04.066 [2024-07-13 23:11:53.354390] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.066 23:11:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.066 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.324 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.324 "name": "raid_bdev1", 00:26:04.324 "uuid": "3b19f13e-fa5c-4e99-ab67-e7c9fdb691dc", 00:26:04.324 "strip_size_kb": 0, 00:26:04.324 "state": "online", 00:26:04.324 "raid_level": "raid1", 00:26:04.324 "superblock": true, 00:26:04.324 "num_base_bdevs": 4, 00:26:04.324 "num_base_bdevs_discovered": 4, 00:26:04.324 "num_base_bdevs_operational": 4, 00:26:04.324 "base_bdevs_list": [ 00:26:04.324 { 00:26:04.324 "name": "BaseBdev1", 00:26:04.324 "uuid": "0cca4494-a602-5902-b89d-90e56b11d82b", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "name": "BaseBdev2", 00:26:04.324 "uuid": "aff52dcb-1410-5074-929c-744461c8e5a4", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "name": "BaseBdev3", 00:26:04.324 "uuid": "eb80996c-3770-5aed-bba2-b4ee1edfac6c", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 }, 00:26:04.324 { 00:26:04.324 "name": "BaseBdev4", 00:26:04.324 "uuid": "c80b8b8e-67fc-53e0-ab35-51dfa7c5f1ad", 00:26:04.324 "is_configured": true, 00:26:04.324 "data_offset": 2048, 00:26:04.324 "data_size": 63488 00:26:04.324 } 00:26:04.324 ] 00:26:04.324 }' 00:26:04.324 23:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.324 23:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.890 23:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:04.890 23:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:05.149 [2024-07-13 23:11:54.351108] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:06.083 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.341 23:11:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.341 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.600 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.600 "name": "raid_bdev1", 00:26:06.600 "uuid": "3b19f13e-fa5c-4e99-ab67-e7c9fdb691dc", 00:26:06.600 "strip_size_kb": 0, 00:26:06.600 "state": "online", 00:26:06.600 "raid_level": "raid1", 00:26:06.600 "superblock": true, 00:26:06.600 "num_base_bdevs": 4, 00:26:06.600 "num_base_bdevs_discovered": 4, 00:26:06.600 "num_base_bdevs_operational": 4, 00:26:06.600 "base_bdevs_list": [ 00:26:06.600 { 00:26:06.600 "name": "BaseBdev1", 00:26:06.600 "uuid": "0cca4494-a602-5902-b89d-90e56b11d82b", 00:26:06.600 "is_configured": true, 00:26:06.600 "data_offset": 2048, 00:26:06.600 "data_size": 63488 00:26:06.600 }, 00:26:06.600 { 00:26:06.600 "name": "BaseBdev2", 00:26:06.600 "uuid": "aff52dcb-1410-5074-929c-744461c8e5a4", 00:26:06.600 "is_configured": true, 00:26:06.600 "data_offset": 2048, 00:26:06.600 "data_size": 63488 00:26:06.600 }, 00:26:06.600 { 00:26:06.600 "name": "BaseBdev3", 00:26:06.600 "uuid": "eb80996c-3770-5aed-bba2-b4ee1edfac6c", 00:26:06.600 "is_configured": true, 00:26:06.600 "data_offset": 2048, 00:26:06.600 "data_size": 63488 00:26:06.600 }, 00:26:06.600 { 00:26:06.600 "name": "BaseBdev4", 00:26:06.600 "uuid": "c80b8b8e-67fc-53e0-ab35-51dfa7c5f1ad", 00:26:06.600 "is_configured": true, 00:26:06.600 "data_offset": 2048, 00:26:06.600 "data_size": 63488 00:26:06.600 } 00:26:06.600 ] 00:26:06.600 }' 00:26:06.600 23:11:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.600 23:11:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.202 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:07.461 [2024-07-13 23:11:56.663093] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.461 [2024-07-13 23:11:56.663395] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:07.461 [2024-07-13 23:11:56.666368] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:07.461 [2024-07-13 23:11:56.666603] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.461 [2024-07-13 23:11:56.666772] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:07.461 [2024-07-13 23:11:56.666903] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:26:07.461 0 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 153389 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 153389 ']' 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 153389 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153389 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153389' 00:26:07.461 killing process with pid 153389 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 153389 00:26:07.461 [2024-07-13 23:11:56.700935] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:07.461 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 153389 00:26:07.461 [2024-07-13 23:11:56.734666] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.KoLmTSiimw 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:07.719 00:26:07.719 real 0m7.758s 00:26:07.719 user 0m12.701s 00:26:07.719 sys 0m0.970s 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.719 23:11:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.719 ************************************ 00:26:07.719 END TEST raid_read_error_test 00:26:07.719 ************************************ 00:26:07.719 23:11:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:07.719 23:11:57 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:07.719 23:11:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:07.719 23:11:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.719 23:11:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:07.719 ************************************ 00:26:07.719 START TEST raid_write_error_test 00:26:07.719 ************************************ 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:26:07.719 23:11:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:07.719 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.gL2VP05WCs 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=153594 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 153594 /var/tmp/spdk-raid.sock 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 153594 ']' 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- 
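The xtrace above is the setup phase of raid_io_error_test for the raid1 / 4-base-bdev / write case. As a readable bash sketch of what those local and loop lines amount to (names and values are taken from this run; the code is an equivalent, not the script verbatim):

    raid_level=raid1
    num_base_bdevs=4
    error_io_type=write
    base_bdevs=()
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")        # BaseBdev1 .. BaseBdev4
    done
    bdevperf_log=$(mktemp -p /raidtest)   # bdevperf statistics land here, e.g. /raidtest/tmp.gL2VP05WCs

The log file matters at the end of the test: the pass/fail verdict is parsed out of it.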
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:07.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.720 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.720 [2024-07-13 23:11:57.119617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:07.720 [2024-07-13 23:11:57.120037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153594 ] 00:26:07.978 [2024-07-13 23:11:57.262977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.978 [2024-07-13 23:11:57.327722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.978 [2024-07-13 23:11:57.384229] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:08.237 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.237 23:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:08.237 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:08.237 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:08.494 BaseBdev1_malloc 00:26:08.494 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:08.494 true 00:26:08.752 23:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:08.752 [2024-07-13 23:11:58.155225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:08.752 [2024-07-13 23:11:58.155672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.752 [2024-07-13 23:11:58.155887] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:26:08.752 [2024-07-13 23:11:58.156076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.752 [2024-07-13 23:11:58.159228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.752 [2024-07-13 23:11:58.159421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:09.011 BaseBdev1 00:26:09.011 23:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:09.011 23:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:09.269 BaseBdev2_malloc 00:26:09.269 23:11:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:09.528 true 00:26:09.528 23:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:09.786 [2024-07-13 23:11:58.979008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:09.786 [2024-07-13 23:11:58.979292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.786 [2024-07-13 23:11:58.979459] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:09.786 [2024-07-13 23:11:58.979613] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.786 [2024-07-13 23:11:58.982324] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.786 [2024-07-13 23:11:58.982529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:09.786 BaseBdev2 00:26:09.786 23:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:09.786 23:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:10.044 BaseBdev3_malloc 00:26:10.044 23:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:10.303 true 00:26:10.303 23:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:10.303 [2024-07-13 23:11:59.709433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:10.303 [2024-07-13 23:11:59.709762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.303 [2024-07-13 23:11:59.709954] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:10.561 [2024-07-13 23:11:59.710116] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.561 [2024-07-13 23:11:59.712899] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.561 [2024-07-13 23:11:59.713175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:10.561 BaseBdev3 00:26:10.561 23:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:10.562 23:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:10.562 BaseBdev4_malloc 00:26:10.562 23:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:10.819 true 00:26:10.819 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:11.077 [2024-07-13 23:12:00.392673] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on EE_BaseBdev4_malloc 00:26:11.077 [2024-07-13 23:12:00.392970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.077 [2024-07-13 23:12:00.393125] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:11.077 [2024-07-13 23:12:00.393332] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.077 [2024-07-13 23:12:00.395790] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.077 [2024-07-13 23:12:00.395984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:11.077 BaseBdev4 00:26:11.077 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:11.336 [2024-07-13 23:12:00.604893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.336 [2024-07-13 23:12:00.607247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:11.336 [2024-07-13 23:12:00.607488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:11.336 [2024-07-13 23:12:00.607677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:11.336 [2024-07-13 23:12:00.608079] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:26:11.336 [2024-07-13 23:12:00.608213] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:11.336 [2024-07-13 23:12:00.608446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:11.336 [2024-07-13 23:12:00.609044] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:26:11.336 [2024-07-13 23:12:00.609178] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:26:11.336 [2024-07-13 23:12:00.609471] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.336 23:12:00 bdev_raid.raid_write_error_test -- 
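Each of the four base bdevs claimed above is a three-layer stack (malloc, then error, then passthru), which is what lets the test inject failures underneath the raid without touching the raid module itself. A minimal sketch of one leg plus the array creation, with $rpc abbreviating the rpc.py invocation shown in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc    # 32 MiB backing store, 512 B blocks
    $rpc bdev_error_create BaseBdev1_malloc               # injectable wrapper, registered as EE_BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ... repeated for BaseBdev2..4, then the raid1 array with an on-disk superblock (-s):
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

The jq over bdev_raid_get_bdevs that follows just confirms the array came up online with all four legs configured.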
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.595 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.595 "name": "raid_bdev1", 00:26:11.595 "uuid": "63a9bb7e-b2ae-4fd0-984a-0e714bee83c3", 00:26:11.595 "strip_size_kb": 0, 00:26:11.595 "state": "online", 00:26:11.595 "raid_level": "raid1", 00:26:11.595 "superblock": true, 00:26:11.595 "num_base_bdevs": 4, 00:26:11.595 "num_base_bdevs_discovered": 4, 00:26:11.595 "num_base_bdevs_operational": 4, 00:26:11.595 "base_bdevs_list": [ 00:26:11.595 { 00:26:11.595 "name": "BaseBdev1", 00:26:11.595 "uuid": "1812dd87-f1c0-5097-b42a-55f8f13ebb25", 00:26:11.595 "is_configured": true, 00:26:11.595 "data_offset": 2048, 00:26:11.595 "data_size": 63488 00:26:11.595 }, 00:26:11.595 { 00:26:11.595 "name": "BaseBdev2", 00:26:11.595 "uuid": "394c3f69-0c14-549d-819e-83d900e77a37", 00:26:11.595 "is_configured": true, 00:26:11.595 "data_offset": 2048, 00:26:11.595 "data_size": 63488 00:26:11.595 }, 00:26:11.595 { 00:26:11.595 "name": "BaseBdev3", 00:26:11.595 "uuid": "4022e3ee-ff69-5d74-aec3-99372812ca50", 00:26:11.595 "is_configured": true, 00:26:11.595 "data_offset": 2048, 00:26:11.595 "data_size": 63488 00:26:11.595 }, 00:26:11.595 { 00:26:11.595 "name": "BaseBdev4", 00:26:11.595 "uuid": "30d7012e-cac1-578f-98e7-1b070621b05d", 00:26:11.595 "is_configured": true, 00:26:11.595 "data_offset": 2048, 00:26:11.595 "data_size": 63488 00:26:11.595 } 00:26:11.595 ] 00:26:11.595 }' 00:26:11.595 23:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.595 23:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.161 23:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:12.161 23:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:12.161 [2024-07-13 23:12:01.530337] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:13.092 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:13.348 [2024-07-13 23:12:02.719703] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:13.348 [2024-07-13 23:12:02.720034] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:13.348 [2024-07-13 23:12:02.720472] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 00:26:13.348 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:13.348 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:13.348 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:26:13.348 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.349 23:12:02 
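The fault injection above is the heart of the test: once bdevperf is driving mixed random reads and writes (-w randrw -M 50 -o 128k), the error bdev under BaseBdev1 is told to fail every write. Because raid1 is redundant, the expected reaction, visible in the _raid_bdev_fail_base_bdev notice, is that the failing leg is dropped while the array stays online, so expected_num_base_bdevs becomes 3. A sketch of the sequence (the background start matches the inverted @824/@823 ordering in the trace):

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &   # release the queued bdevperf job
    sleep 1
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure

The state check below then expects num_base_bdevs_discovered and num_base_bdevs_operational of 3, with a null entry in BaseBdev1's slot.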
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.349 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.606 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.606 "name": "raid_bdev1", 00:26:13.606 "uuid": "63a9bb7e-b2ae-4fd0-984a-0e714bee83c3", 00:26:13.606 "strip_size_kb": 0, 00:26:13.606 "state": "online", 00:26:13.606 "raid_level": "raid1", 00:26:13.606 "superblock": true, 00:26:13.606 "num_base_bdevs": 4, 00:26:13.606 "num_base_bdevs_discovered": 3, 00:26:13.606 "num_base_bdevs_operational": 3, 00:26:13.606 "base_bdevs_list": [ 00:26:13.606 { 00:26:13.606 "name": null, 00:26:13.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.606 "is_configured": false, 00:26:13.606 "data_offset": 2048, 00:26:13.606 "data_size": 63488 00:26:13.606 }, 00:26:13.606 { 00:26:13.606 "name": "BaseBdev2", 00:26:13.606 "uuid": "394c3f69-0c14-549d-819e-83d900e77a37", 00:26:13.606 "is_configured": true, 00:26:13.606 "data_offset": 2048, 00:26:13.606 "data_size": 63488 00:26:13.606 }, 00:26:13.606 { 00:26:13.606 "name": "BaseBdev3", 00:26:13.606 "uuid": "4022e3ee-ff69-5d74-aec3-99372812ca50", 00:26:13.606 "is_configured": true, 00:26:13.606 "data_offset": 2048, 00:26:13.606 "data_size": 63488 00:26:13.606 }, 00:26:13.606 { 00:26:13.606 "name": "BaseBdev4", 00:26:13.606 "uuid": "30d7012e-cac1-578f-98e7-1b070621b05d", 00:26:13.606 "is_configured": true, 00:26:13.606 "data_offset": 2048, 00:26:13.606 "data_size": 63488 00:26:13.606 } 00:26:13.606 ] 00:26:13.606 }' 00:26:13.606 23:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.606 23:12:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.171 23:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:14.427 [2024-07-13 23:12:03.810906] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:14.427 [2024-07-13 23:12:03.811237] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:14.427 [2024-07-13 23:12:03.814315] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:14.427 [2024-07-13 23:12:03.814545] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.427 [2024-07-13 23:12:03.814707] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:14.427 [2024-07-13 23:12:03.814824] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:26:14.427 0 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 153594 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 153594 ']' 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 153594 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153594 00:26:14.685 killing process with pid 153594 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153594' 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 153594 00:26:14.685 23:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 153594 00:26:14.685 [2024-07-13 23:12:03.858288] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:14.685 [2024-07-13 23:12:03.892768] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.gL2VP05WCs 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:14.941 ************************************ 00:26:14.941 END TEST raid_write_error_test 00:26:14.941 ************************************ 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:14.941 00:26:14.941 real 0m7.101s 00:26:14.941 user 0m11.901s 00:26:14.941 sys 0m0.928s 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.941 23:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.941 23:12:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:14.941 23:12:04 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:26:14.941 23:12:04 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:26:14.941 23:12:04 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:14.941 23:12:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:14.941 23:12:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.941 23:12:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.941 ************************************ 00:26:14.941 START TEST raid_rebuild_test 
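raid_write_error_test above passes on the same criterion as its read twin: the failures-per-second figure that bdevperf logged for raid_bdev1 must be exactly 0.00, since raid1 has redundancy and the injected write errors must never surface to the application. A readable equivalent of that final check (the tmp path is this run's bdevperf log):

    fail_per_s=$(grep -v Job /raidtest/tmp.gL2VP05WCs | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s = 0.00 ]]   # any I/O failure reaching the raid consumer fails the test

raid_rebuild_test, invoked here as raid_rebuild_test raid1 2 false false true (raid1, two base bdevs, no superblock, no background I/O, with verify), exercises removal and rebuild of a live leg instead of error injection.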
00:26:14.941 ************************************ 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=153788 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 153788 /var/tmp/spdk-raid.sock 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 153788 ']' 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
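The rebuild test runs bdevperf with a different profile: 3 MiB I/O at queue depth 2 (-o 3M -q 2), plus the -U and -z flags captured in the trace; -z makes bdevperf start idle and wait for a perform_tests RPC instead of running immediately. A sketch of the launch-and-wait pattern, where waitforlisten is the autotest helper named above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the RPC socket answers

The notice just below about an I/O size of 3145728 exceeding the 65536-byte zero-copy threshold is a direct consequence of -o 3M and is expected.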
00:26:14.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.941 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.941 [2024-07-13 23:12:04.277529] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:14.941 [2024-07-13 23:12:04.277985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153788 ] 00:26:14.941 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:14.941 Zero copy mechanism will not be used. 00:26:15.198 [2024-07-13 23:12:04.424288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.198 [2024-07-13 23:12:04.487709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.198 [2024-07-13 23:12:04.542042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:15.198 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.198 23:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:26:15.198 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:15.198 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:15.456 BaseBdev1_malloc 00:26:15.721 23:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:15.988 [2024-07-13 23:12:05.127143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:15.988 [2024-07-13 23:12:05.127424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.988 [2024-07-13 23:12:05.127609] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:26:15.988 [2024-07-13 23:12:05.127819] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.988 [2024-07-13 23:12:05.130822] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.988 [2024-07-13 23:12:05.131001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:15.988 BaseBdev1 00:26:15.988 23:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:15.988 23:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:15.988 BaseBdev2_malloc 00:26:15.988 23:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:16.246 [2024-07-13 23:12:05.630556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:16.246 [2024-07-13 23:12:05.630840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.246 [2024-07-13 23:12:05.630926] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:16.246 [2024-07-13 23:12:05.631180] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.246 [2024-07-13 23:12:05.633735] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.246 [2024-07-13 23:12:05.633951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:16.246 BaseBdev2 00:26:16.246 23:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:16.504 spare_malloc 00:26:16.504 23:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:16.761 spare_delay 00:26:16.761 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:17.020 [2024-07-13 23:12:06.411287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:17.020 [2024-07-13 23:12:06.411602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.020 [2024-07-13 23:12:06.411698] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:17.020 [2024-07-13 23:12:06.411904] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.020 [2024-07-13 23:12:06.414747] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.020 [2024-07-13 23:12:06.414980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:17.020 spare 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:17.278 [2024-07-13 23:12:06.647524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:17.278 [2024-07-13 23:12:06.649856] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:17.278 [2024-07-13 23:12:06.650143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:26:17.278 [2024-07-13 23:12:06.650281] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:17.278 [2024-07-13 23:12:06.650540] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:26:17.278 [2024-07-13 23:12:06.651143] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:26:17.278 [2024-07-13 23:12:06.651292] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:26:17.278 [2024-07-13 23:12:06.651654] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # 
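The spare that will later rejoin the array is deliberately built on a delay bdev, so the eventual rebuild is slow enough for the polling loop to observe: reads pass through undelayed while the write latency parameters are set to 100000 microseconds. A sketch of that stack and the two-leg array (no -s this time, matching superblock=false):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    # -r/-t: avg/p99 read latency, -w/-n: avg/p99 write latency, in microseconds
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1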
local raid_level=raid1 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.278 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.536 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.536 "name": "raid_bdev1", 00:26:17.536 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:17.536 "strip_size_kb": 0, 00:26:17.536 "state": "online", 00:26:17.536 "raid_level": "raid1", 00:26:17.536 "superblock": false, 00:26:17.536 "num_base_bdevs": 2, 00:26:17.536 "num_base_bdevs_discovered": 2, 00:26:17.536 "num_base_bdevs_operational": 2, 00:26:17.536 "base_bdevs_list": [ 00:26:17.536 { 00:26:17.536 "name": "BaseBdev1", 00:26:17.536 "uuid": "341b7d36-df4b-561c-b332-8c94f67f3a6d", 00:26:17.536 "is_configured": true, 00:26:17.536 "data_offset": 0, 00:26:17.536 "data_size": 65536 00:26:17.536 }, 00:26:17.536 { 00:26:17.536 "name": "BaseBdev2", 00:26:17.536 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:17.536 "is_configured": true, 00:26:17.536 "data_offset": 0, 00:26:17.536 "data_size": 65536 00:26:17.536 } 00:26:17.536 ] 00:26:17.536 }' 00:26:17.536 23:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.536 23:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.103 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:18.103 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:18.362 [2024-07-13 23:12:07.676080] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:18.362 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:26:18.362 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:18.362 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.620 23:12:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:18.878 [2024-07-13 23:12:08.163990] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:18.878 /dev/nbd0 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:18.878 1+0 records in 00:26:18.878 1+0 records out 00:26:18.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075474 s, 5.4 MB/s 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:18.878 23:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:25.443 65536+0 records in 00:26:25.443 65536+0 records out 
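Before degrading the array the test seeds it with real data: raid_bdev1 is exported to the kernel as an NBD device and written end to end with direct I/O. The count of 65536 blocks matches the num_blocks that jq pulled from bdev_get_bdevs above (65536 x 512 B = 32 MiB, the 33554432 bytes reported below), and the small 1+0 / 4096-byte dd is just waitfornbd probing that /dev/nbd0 became readable. A sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0   # expose the raid bdev as a kernel block device
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
    $rpc nbd_stop_disk /dev/nbd0

Once the device is detached again, BaseBdev1 is removed from the array to force the degraded state that the rebuild will repair.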
00:26:25.443 33554432 bytes (34 MB, 32 MiB) copied, 5.59889 s, 6.0 MB/s 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:25.443 23:12:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:25.443 [2024-07-13 23:12:14.117837] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:25.443 [2024-07-13 23:12:14.393577] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.443 
"name": "raid_bdev1", 00:26:25.443 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:25.443 "strip_size_kb": 0, 00:26:25.443 "state": "online", 00:26:25.443 "raid_level": "raid1", 00:26:25.443 "superblock": false, 00:26:25.443 "num_base_bdevs": 2, 00:26:25.443 "num_base_bdevs_discovered": 1, 00:26:25.443 "num_base_bdevs_operational": 1, 00:26:25.443 "base_bdevs_list": [ 00:26:25.443 { 00:26:25.443 "name": null, 00:26:25.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.443 "is_configured": false, 00:26:25.443 "data_offset": 0, 00:26:25.443 "data_size": 65536 00:26:25.443 }, 00:26:25.443 { 00:26:25.443 "name": "BaseBdev2", 00:26:25.443 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:25.443 "is_configured": true, 00:26:25.443 "data_offset": 0, 00:26:25.443 "data_size": 65536 00:26:25.443 } 00:26:25.443 ] 00:26:25.443 }' 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.443 23:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.026 23:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:26.285 [2024-07-13 23:12:15.497861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:26.285 [2024-07-13 23:12:15.503597] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:26:26.285 [2024-07-13 23:12:15.505909] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:26.285 23:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.221 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.479 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.479 "name": "raid_bdev1", 00:26:27.479 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:27.479 "strip_size_kb": 0, 00:26:27.479 "state": "online", 00:26:27.479 "raid_level": "raid1", 00:26:27.479 "superblock": false, 00:26:27.479 "num_base_bdevs": 2, 00:26:27.479 "num_base_bdevs_discovered": 2, 00:26:27.479 "num_base_bdevs_operational": 2, 00:26:27.479 "process": { 00:26:27.479 "type": "rebuild", 00:26:27.479 "target": "spare", 00:26:27.479 "progress": { 00:26:27.479 "blocks": 24576, 00:26:27.479 "percent": 37 00:26:27.479 } 00:26:27.479 }, 00:26:27.479 "base_bdevs_list": [ 00:26:27.479 { 00:26:27.479 "name": "spare", 00:26:27.479 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:27.479 "is_configured": true, 00:26:27.479 "data_offset": 0, 00:26:27.479 "data_size": 65536 00:26:27.479 }, 00:26:27.479 { 00:26:27.479 "name": "BaseBdev2", 00:26:27.479 "uuid": 
"aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:27.479 "is_configured": true, 00:26:27.479 "data_offset": 0, 00:26:27.479 "data_size": 65536 00:26:27.479 } 00:26:27.479 ] 00:26:27.479 }' 00:26:27.479 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:27.479 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.479 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.479 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.479 23:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:27.744 [2024-07-13 23:12:17.120320] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:28.008 [2024-07-13 23:12:17.218934] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:28.008 [2024-07-13 23:12:17.219260] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.008 [2024-07-13 23:12:17.219448] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:28.008 [2024-07-13 23:12:17.219504] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.008 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.267 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.267 "name": "raid_bdev1", 00:26:28.267 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:28.267 "strip_size_kb": 0, 00:26:28.267 "state": "online", 00:26:28.267 "raid_level": "raid1", 00:26:28.267 "superblock": false, 00:26:28.267 "num_base_bdevs": 2, 00:26:28.267 "num_base_bdevs_discovered": 1, 00:26:28.267 "num_base_bdevs_operational": 1, 00:26:28.267 "base_bdevs_list": [ 00:26:28.267 { 00:26:28.267 "name": null, 00:26:28.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.267 "is_configured": false, 00:26:28.267 "data_offset": 0, 00:26:28.267 "data_size": 65536 00:26:28.267 }, 00:26:28.267 { 00:26:28.267 
"name": "BaseBdev2", 00:26:28.267 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:28.267 "is_configured": true, 00:26:28.267 "data_offset": 0, 00:26:28.267 "data_size": 65536 00:26:28.267 } 00:26:28.267 ] 00:26:28.267 }' 00:26:28.267 23:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.267 23:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.833 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.091 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.091 "name": "raid_bdev1", 00:26:29.091 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:29.091 "strip_size_kb": 0, 00:26:29.091 "state": "online", 00:26:29.091 "raid_level": "raid1", 00:26:29.091 "superblock": false, 00:26:29.091 "num_base_bdevs": 2, 00:26:29.091 "num_base_bdevs_discovered": 1, 00:26:29.091 "num_base_bdevs_operational": 1, 00:26:29.091 "base_bdevs_list": [ 00:26:29.091 { 00:26:29.091 "name": null, 00:26:29.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.091 "is_configured": false, 00:26:29.091 "data_offset": 0, 00:26:29.091 "data_size": 65536 00:26:29.091 }, 00:26:29.091 { 00:26:29.091 "name": "BaseBdev2", 00:26:29.091 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:29.091 "is_configured": true, 00:26:29.091 "data_offset": 0, 00:26:29.091 "data_size": 65536 00:26:29.091 } 00:26:29.091 ] 00:26:29.091 }' 00:26:29.091 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.091 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:29.091 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.091 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:29.091 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:29.350 [2024-07-13 23:12:18.680411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:29.350 [2024-07-13 23:12:18.688373] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:26:29.350 [2024-07-13 23:12:18.691143] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:29.350 23:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:30.722 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.722 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:26:30.722 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:30.722 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:30.722 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.723 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.723 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.723 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.723 "name": "raid_bdev1", 00:26:30.723 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:30.723 "strip_size_kb": 0, 00:26:30.723 "state": "online", 00:26:30.723 "raid_level": "raid1", 00:26:30.723 "superblock": false, 00:26:30.723 "num_base_bdevs": 2, 00:26:30.723 "num_base_bdevs_discovered": 2, 00:26:30.723 "num_base_bdevs_operational": 2, 00:26:30.723 "process": { 00:26:30.723 "type": "rebuild", 00:26:30.723 "target": "spare", 00:26:30.723 "progress": { 00:26:30.723 "blocks": 24576, 00:26:30.723 "percent": 37 00:26:30.723 } 00:26:30.723 }, 00:26:30.723 "base_bdevs_list": [ 00:26:30.723 { 00:26:30.723 "name": "spare", 00:26:30.723 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:30.723 "is_configured": true, 00:26:30.723 "data_offset": 0, 00:26:30.723 "data_size": 65536 00:26:30.723 }, 00:26:30.723 { 00:26:30.723 "name": "BaseBdev2", 00:26:30.723 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:30.723 "is_configured": true, 00:26:30.723 "data_offset": 0, 00:26:30.723 "data_size": 65536 00:26:30.723 } 00:26:30.723 ] 00:26:30.723 }' 00:26:30.723 23:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=778 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
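Everything about the rebuild is observed through bdev_raid_get_bdevs: while a rebuild runs, the raid object carries a process block with type, target, and progress (blocks/percent), and verify_raid_bdev_process asserts type == rebuild and target == spare on every pass of the SECONDS < timeout loop. One polling step as a self-contained sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$tmp") == rebuild ]]
    [[ $(jq -r '.process.target // "none"' <<< "$tmp") == spare ]]

The // "none" fallback is what lets the same helper also assert that no process is present, as in the verify_raid_bdev_process raid_bdev1 none none call after the spare is removed mid-rebuild.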
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.723 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.982 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:30.982 "name": "raid_bdev1", 00:26:30.982 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:30.982 "strip_size_kb": 0, 00:26:30.982 "state": "online", 00:26:30.982 "raid_level": "raid1", 00:26:30.982 "superblock": false, 00:26:30.982 "num_base_bdevs": 2, 00:26:30.982 "num_base_bdevs_discovered": 2, 00:26:30.982 "num_base_bdevs_operational": 2, 00:26:30.982 "process": { 00:26:30.982 "type": "rebuild", 00:26:30.982 "target": "spare", 00:26:30.982 "progress": { 00:26:30.982 "blocks": 32768, 00:26:30.982 "percent": 50 00:26:30.982 } 00:26:30.982 }, 00:26:30.982 "base_bdevs_list": [ 00:26:30.982 { 00:26:30.982 "name": "spare", 00:26:30.982 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:30.982 "is_configured": true, 00:26:30.982 "data_offset": 0, 00:26:30.982 "data_size": 65536 00:26:30.982 }, 00:26:30.982 { 00:26:30.982 "name": "BaseBdev2", 00:26:30.982 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:30.982 "is_configured": true, 00:26:30.982 "data_offset": 0, 00:26:30.982 "data_size": 65536 00:26:30.982 } 00:26:30.982 ] 00:26:30.982 }' 00:26:30.982 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:30.982 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.982 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:31.240 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:31.240 23:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.173 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.430 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.430 "name": "raid_bdev1", 00:26:32.430 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:32.430 "strip_size_kb": 0, 00:26:32.430 "state": "online", 00:26:32.430 "raid_level": "raid1", 00:26:32.430 "superblock": false, 00:26:32.430 "num_base_bdevs": 2, 00:26:32.430 "num_base_bdevs_discovered": 2, 00:26:32.430 "num_base_bdevs_operational": 2, 00:26:32.430 "process": { 00:26:32.430 "type": "rebuild", 00:26:32.430 "target": "spare", 00:26:32.430 "progress": { 00:26:32.430 "blocks": 61440, 00:26:32.430 "percent": 93 00:26:32.430 } 00:26:32.430 }, 00:26:32.430 "base_bdevs_list": [ 00:26:32.430 { 00:26:32.430 "name": 
"spare", 00:26:32.430 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:32.430 "is_configured": true, 00:26:32.430 "data_offset": 0, 00:26:32.430 "data_size": 65536 00:26:32.430 }, 00:26:32.430 { 00:26:32.430 "name": "BaseBdev2", 00:26:32.430 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:32.430 "is_configured": true, 00:26:32.430 "data_offset": 0, 00:26:32.430 "data_size": 65536 00:26:32.430 } 00:26:32.430 ] 00:26:32.430 }' 00:26:32.430 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.431 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:32.431 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.688 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:32.688 23:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:32.688 [2024-07-13 23:12:21.913407] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:32.688 [2024-07-13 23:12:21.913939] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:32.688 [2024-07-13 23:12:21.914233] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.622 23:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:33.880 "name": "raid_bdev1", 00:26:33.880 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:33.880 "strip_size_kb": 0, 00:26:33.880 "state": "online", 00:26:33.880 "raid_level": "raid1", 00:26:33.880 "superblock": false, 00:26:33.880 "num_base_bdevs": 2, 00:26:33.880 "num_base_bdevs_discovered": 2, 00:26:33.880 "num_base_bdevs_operational": 2, 00:26:33.880 "base_bdevs_list": [ 00:26:33.880 { 00:26:33.880 "name": "spare", 00:26:33.880 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:33.880 "is_configured": true, 00:26:33.880 "data_offset": 0, 00:26:33.880 "data_size": 65536 00:26:33.880 }, 00:26:33.880 { 00:26:33.880 "name": "BaseBdev2", 00:26:33.880 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:33.880 "is_configured": true, 00:26:33.880 "data_offset": 0, 00:26:33.880 "data_size": 65536 00:26:33.880 } 00:26:33.880 ] 00:26:33.880 }' 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.880 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.138 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.138 "name": "raid_bdev1", 00:26:34.138 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:34.138 "strip_size_kb": 0, 00:26:34.138 "state": "online", 00:26:34.138 "raid_level": "raid1", 00:26:34.138 "superblock": false, 00:26:34.138 "num_base_bdevs": 2, 00:26:34.138 "num_base_bdevs_discovered": 2, 00:26:34.138 "num_base_bdevs_operational": 2, 00:26:34.138 "base_bdevs_list": [ 00:26:34.138 { 00:26:34.138 "name": "spare", 00:26:34.138 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:34.138 "is_configured": true, 00:26:34.138 "data_offset": 0, 00:26:34.138 "data_size": 65536 00:26:34.138 }, 00:26:34.138 { 00:26:34.138 "name": "BaseBdev2", 00:26:34.138 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:34.138 "is_configured": true, 00:26:34.138 "data_offset": 0, 00:26:34.138 "data_size": 65536 00:26:34.138 } 00:26:34.138 ] 00:26:34.138 }' 00:26:34.138 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.138 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:34.138 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.455 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.722 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.722 "name": "raid_bdev1", 00:26:34.722 "uuid": "3cc80de1-3990-4d83-a713-7f458551ecfc", 00:26:34.722 "strip_size_kb": 0, 00:26:34.722 "state": "online", 00:26:34.722 "raid_level": "raid1", 00:26:34.722 "superblock": false, 00:26:34.722 "num_base_bdevs": 2, 00:26:34.723 "num_base_bdevs_discovered": 2, 00:26:34.723 "num_base_bdevs_operational": 2, 00:26:34.723 "base_bdevs_list": [ 00:26:34.723 { 00:26:34.723 "name": "spare", 00:26:34.723 "uuid": "b92a32f8-9422-564d-b461-48965b4b5d83", 00:26:34.723 "is_configured": true, 00:26:34.723 "data_offset": 0, 00:26:34.723 "data_size": 65536 00:26:34.723 }, 00:26:34.723 { 00:26:34.723 "name": "BaseBdev2", 00:26:34.723 "uuid": "aa5bc7bb-f505-5e17-bfd0-dc24956434e7", 00:26:34.723 "is_configured": true, 00:26:34.723 "data_offset": 0, 00:26:34.723 "data_size": 65536 00:26:34.723 } 00:26:34.723 ] 00:26:34.723 }' 00:26:34.723 23:12:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.723 23:12:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.289 23:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:35.546 [2024-07-13 23:12:24.822791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:35.546 [2024-07-13 23:12:24.823160] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.546 [2024-07-13 23:12:24.823468] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.546 [2024-07-13 23:12:24.823742] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.546 [2024-07-13 23:12:24.823917] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:26:35.546 23:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.546 23:12:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:35.803 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:36.061 /dev/nbd0 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:36.061 1+0 records in 00:26:36.061 1+0 records out 00:26:36.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374034 s, 11.0 MB/s 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.061 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:26:36.062 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.062 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:36.062 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:26:36.062 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:36.062 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:36.062 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:36.320 /dev/nbd1 00:26:36.320 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:36.320 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:36.321 23:12:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:36.321 1+0 records in 00:26:36.321 1+0 records out 00:26:36.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495717 s, 8.3 MB/s 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:36.321 23:12:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:36.580 23:12:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:36.838 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 153788 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 153788 ']' 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 153788 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153788 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153788' 00:26:37.098 killing process with pid 153788 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 153788 00:26:37.098 Received shutdown signal, test time was about 60.000000 seconds 00:26:37.098 00:26:37.098 Latency(us) 00:26:37.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.098 =================================================================================================================== 00:26:37.098 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:37.098 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 153788 00:26:37.098 [2024-07-13 23:12:26.327131] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:37.098 [2024-07-13 23:12:26.369737] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:37.356 23:12:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:26:37.356 00:26:37.356 real 0m22.533s 00:26:37.356 user 0m31.601s 00:26:37.356 sys 0m4.074s 00:26:37.356 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.356 23:12:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.356 ************************************ 00:26:37.356 END TEST raid_rebuild_test 00:26:37.356 ************************************ 00:26:37.613 23:12:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:37.613 23:12:26 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:26:37.613 23:12:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:37.613 23:12:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.613 23:12:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:37.613 
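The raid_rebuild_test run that just finished leans on one verification helper over and over: verify_raid_bdev_process, whose traced commands are visible above at bdev_raid.sh lines 182-190. A minimal sketch of that helper, reconstructed from the xtrace (the real script may differ in details):

    verify_raid_bdev_process() {
        local raid_bdev_name=$1    # e.g. raid_bdev1
        local process_type=$2     # expected .process.type: "rebuild" or "none"
        local target=$3           # expected .process.target: "spare" or "none"
        local raid_bdev_info

        # Fetch all raid bdevs over the private RPC socket, keep the one under test.
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "'"$raid_bdev_name"'")')

        # While a rebuild is running the RPC output carries a "process" object;
        # jq's '// "none"' maps a missing field to the literal string "none".
        [[ $(echo "$raid_bdev_info" | jq -r '.process.type // "none"') == "$process_type" ]]
        [[ $(echo "$raid_bdev_info" | jq -r '.process.target // "none"') == "$target" ]]
    }

The polling loop wrapped around it (bdev_raid.sh lines 705-710 in the trace) just re-runs this check after a one-second sleep until either the process object disappears (rebuild finished, triggering the break at line 708) or the local timeout measured against bash's SECONDS counter expires.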
************************************ 00:26:37.613 START TEST raid_rebuild_test_sb 00:26:37.613 ************************************ 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=154327 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 154327 /var/tmp/spdk-raid.sock 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 154327 ']' 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:37.613 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 
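Each test in this file follows the same bring-up pattern, visible in the trace just below: start bdevperf as a background SPDK application bound to a private RPC socket, block until that socket accepts connections, then drive all configuration through rpc.py. A hedged sketch of that pattern, using the exact command line recorded in the trace (my reading of the flags: -z keeps bdevperf idle until told to run I/O, and -q/-o/-w/-M/-t set queue depth, I/O size, workload, read/write mix, and duration):

    # Launch the app under test; it creates /var/tmp/spdk-raid.sock and waits (-z).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the UNIX-domain RPC socket is listening (polls with a retry cap).
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # From here on, every configuration step is an RPC against that socket:
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_malloc_create 32 512 -b BaseBdev1_malloc

The malloc bdevs created this way become the base bdevs of the raid1 array, and the passthru bdevs layered on top of them (vbdev_passthru in the trace) give the test named, claimable devices it can later yank out to force a rebuild.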
00:26:37.614 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:37.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:37.614 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.614 23:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.614 [2024-07-13 23:12:26.870045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:37.614 [2024-07-13 23:12:26.870530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154327 ] 00:26:37.614 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:37.614 Zero copy mechanism will not be used. 00:26:37.614 [2024-07-13 23:12:27.009886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.872 [2024-07-13 23:12:27.135991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.872 [2024-07-13 23:12:27.216687] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:38.808 23:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.808 23:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:38.808 23:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:38.808 23:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:38.808 BaseBdev1_malloc 00:26:38.808 23:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:39.066 [2024-07-13 23:12:28.453394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:39.066 [2024-07-13 23:12:28.453918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.066 [2024-07-13 23:12:28.454102] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:26:39.066 [2024-07-13 23:12:28.454353] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.066 [2024-07-13 23:12:28.457255] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.066 [2024-07-13 23:12:28.457500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:39.066 BaseBdev1 00:26:39.066 23:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:39.325 23:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:39.583 BaseBdev2_malloc 00:26:39.583 23:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:39.583 [2024-07-13 23:12:28.987325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev2_malloc 00:26:39.583 [2024-07-13 23:12:28.987809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.583 [2024-07-13 23:12:28.988061] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:39.583 [2024-07-13 23:12:28.988237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.840 [2024-07-13 23:12:28.991506] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.840 [2024-07-13 23:12:28.991708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:39.840 BaseBdev2 00:26:39.840 23:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:39.840 spare_malloc 00:26:39.840 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:40.098 spare_delay 00:26:40.098 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:40.354 [2024-07-13 23:12:29.644365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:40.354 [2024-07-13 23:12:29.644835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.354 [2024-07-13 23:12:29.645108] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:40.354 [2024-07-13 23:12:29.645328] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.354 [2024-07-13 23:12:29.648661] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.354 [2024-07-13 23:12:29.648865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:40.354 spare 00:26:40.354 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:40.612 [2024-07-13 23:12:29.917543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:40.612 [2024-07-13 23:12:29.920685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:40.612 [2024-07-13 23:12:29.921192] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:26:40.612 [2024-07-13 23:12:29.921398] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:40.612 [2024-07-13 23:12:29.921751] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:26:40.612 [2024-07-13 23:12:29.922537] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:26:40.612 [2024-07-13 23:12:29.922681] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:26:40.612 [2024-07-13 23:12:29.923140] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=raid_bdev1 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.612 23:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.871 23:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.871 "name": "raid_bdev1", 00:26:40.871 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:40.871 "strip_size_kb": 0, 00:26:40.871 "state": "online", 00:26:40.871 "raid_level": "raid1", 00:26:40.871 "superblock": true, 00:26:40.871 "num_base_bdevs": 2, 00:26:40.871 "num_base_bdevs_discovered": 2, 00:26:40.871 "num_base_bdevs_operational": 2, 00:26:40.871 "base_bdevs_list": [ 00:26:40.871 { 00:26:40.871 "name": "BaseBdev1", 00:26:40.871 "uuid": "1c9cb637-4d81-5a24-9a77-8dc5b895c22d", 00:26:40.871 "is_configured": true, 00:26:40.871 "data_offset": 2048, 00:26:40.871 "data_size": 63488 00:26:40.871 }, 00:26:40.871 { 00:26:40.871 "name": "BaseBdev2", 00:26:40.871 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:40.871 "is_configured": true, 00:26:40.871 "data_offset": 2048, 00:26:40.871 "data_size": 63488 00:26:40.871 } 00:26:40.871 ] 00:26:40.871 }' 00:26:40.871 23:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.871 23:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.438 23:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:41.438 23:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:41.696 [2024-07-13 23:12:31.003735] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.696 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:26:41.696 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.696 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:41.954 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:42.212 [2024-07-13 23:12:31.495530] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:42.213 /dev/nbd0 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:42.213 1+0 records in 00:26:42.213 1+0 records out 00:26:42.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548508 s, 7.5 MB/s 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:42.213 23:12:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:42.213 23:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:48.777 63488+0 records in 00:26:48.777 63488+0 records out 00:26:48.777 32505856 bytes (33 MB, 31 MiB) copied, 5.77233 s, 5.6 MB/s 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.777 [2024-07-13 23:12:37.596524] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:48.777 [2024-07-13 23:12:37.816192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.777 23:12:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.777 23:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.777 23:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.777 "name": "raid_bdev1", 00:26:48.778 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:48.778 "strip_size_kb": 0, 00:26:48.778 "state": "online", 00:26:48.778 "raid_level": "raid1", 00:26:48.778 "superblock": true, 00:26:48.778 "num_base_bdevs": 2, 00:26:48.778 "num_base_bdevs_discovered": 1, 00:26:48.778 "num_base_bdevs_operational": 1, 00:26:48.778 "base_bdevs_list": [ 00:26:48.778 { 00:26:48.778 "name": null, 00:26:48.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.778 "is_configured": false, 00:26:48.778 "data_offset": 2048, 00:26:48.778 "data_size": 63488 00:26:48.778 }, 00:26:48.778 { 00:26:48.778 "name": "BaseBdev2", 00:26:48.778 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:48.778 "is_configured": true, 00:26:48.778 "data_offset": 2048, 00:26:48.778 "data_size": 63488 00:26:48.778 } 00:26:48.778 ] 00:26:48.778 }' 00:26:48.778 23:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.778 23:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.345 23:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:49.603 [2024-07-13 23:12:38.940466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:49.603 [2024-07-13 23:12:38.947848] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:26:49.603 [2024-07-13 23:12:38.950282] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:49.603 23:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.975 23:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.975 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.975 "name": "raid_bdev1", 00:26:50.975 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:50.975 "strip_size_kb": 0, 00:26:50.975 "state": "online", 00:26:50.975 "raid_level": "raid1", 00:26:50.975 "superblock": true, 00:26:50.975 "num_base_bdevs": 2, 00:26:50.975 "num_base_bdevs_discovered": 2, 00:26:50.975 "num_base_bdevs_operational": 2, 00:26:50.975 
"process": { 00:26:50.975 "type": "rebuild", 00:26:50.975 "target": "spare", 00:26:50.975 "progress": { 00:26:50.975 "blocks": 24576, 00:26:50.975 "percent": 38 00:26:50.975 } 00:26:50.975 }, 00:26:50.975 "base_bdevs_list": [ 00:26:50.975 { 00:26:50.975 "name": "spare", 00:26:50.975 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:50.975 "is_configured": true, 00:26:50.975 "data_offset": 2048, 00:26:50.975 "data_size": 63488 00:26:50.975 }, 00:26:50.975 { 00:26:50.975 "name": "BaseBdev2", 00:26:50.975 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:50.975 "is_configured": true, 00:26:50.975 "data_offset": 2048, 00:26:50.975 "data_size": 63488 00:26:50.975 } 00:26:50.975 ] 00:26:50.975 }' 00:26:50.975 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.975 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.975 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.975 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.975 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:51.233 [2024-07-13 23:12:40.556940] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:51.233 [2024-07-13 23:12:40.561525] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:51.233 [2024-07-13 23:12:40.561849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.233 [2024-07-13 23:12:40.562020] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:51.233 [2024-07-13 23:12:40.562208] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.233 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.491 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.491 "name": "raid_bdev1", 00:26:51.491 "uuid": 
"cfa52e29-e192-457c-aa14-334af231f049", 00:26:51.491 "strip_size_kb": 0, 00:26:51.491 "state": "online", 00:26:51.491 "raid_level": "raid1", 00:26:51.491 "superblock": true, 00:26:51.491 "num_base_bdevs": 2, 00:26:51.491 "num_base_bdevs_discovered": 1, 00:26:51.491 "num_base_bdevs_operational": 1, 00:26:51.491 "base_bdevs_list": [ 00:26:51.491 { 00:26:51.491 "name": null, 00:26:51.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.491 "is_configured": false, 00:26:51.491 "data_offset": 2048, 00:26:51.491 "data_size": 63488 00:26:51.491 }, 00:26:51.491 { 00:26:51.491 "name": "BaseBdev2", 00:26:51.491 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:51.491 "is_configured": true, 00:26:51.491 "data_offset": 2048, 00:26:51.491 "data_size": 63488 00:26:51.491 } 00:26:51.491 ] 00:26:51.491 }' 00:26:51.491 23:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.491 23:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.425 "name": "raid_bdev1", 00:26:52.425 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:52.425 "strip_size_kb": 0, 00:26:52.425 "state": "online", 00:26:52.425 "raid_level": "raid1", 00:26:52.425 "superblock": true, 00:26:52.425 "num_base_bdevs": 2, 00:26:52.425 "num_base_bdevs_discovered": 1, 00:26:52.425 "num_base_bdevs_operational": 1, 00:26:52.425 "base_bdevs_list": [ 00:26:52.425 { 00:26:52.425 "name": null, 00:26:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.425 "is_configured": false, 00:26:52.425 "data_offset": 2048, 00:26:52.425 "data_size": 63488 00:26:52.425 }, 00:26:52.425 { 00:26:52.425 "name": "BaseBdev2", 00:26:52.425 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:52.425 "is_configured": true, 00:26:52.425 "data_offset": 2048, 00:26:52.425 "data_size": 63488 00:26:52.425 } 00:26:52.425 ] 00:26:52.425 }' 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.425 23:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:52.683 [2024-07-13 23:12:42.017567] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.683 [2024-07-13 23:12:42.024779] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:26:52.683 [2024-07-13 23:12:42.027102] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:52.683 23:12:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.058 "name": "raid_bdev1", 00:26:54.058 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:54.058 "strip_size_kb": 0, 00:26:54.058 "state": "online", 00:26:54.058 "raid_level": "raid1", 00:26:54.058 "superblock": true, 00:26:54.058 "num_base_bdevs": 2, 00:26:54.058 "num_base_bdevs_discovered": 2, 00:26:54.058 "num_base_bdevs_operational": 2, 00:26:54.058 "process": { 00:26:54.058 "type": "rebuild", 00:26:54.058 "target": "spare", 00:26:54.058 "progress": { 00:26:54.058 "blocks": 24576, 00:26:54.058 "percent": 38 00:26:54.058 } 00:26:54.058 }, 00:26:54.058 "base_bdevs_list": [ 00:26:54.058 { 00:26:54.058 "name": "spare", 00:26:54.058 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:54.058 "is_configured": true, 00:26:54.058 "data_offset": 2048, 00:26:54.058 "data_size": 63488 00:26:54.058 }, 00:26:54.058 { 00:26:54.058 "name": "BaseBdev2", 00:26:54.058 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:54.058 "is_configured": true, 00:26:54.058 "data_offset": 2048, 00:26:54.058 "data_size": 63488 00:26:54.058 } 00:26:54.058 ] 00:26:54.058 }' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:54.058 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 
2 -gt 2 ']' 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=801 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.058 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.316 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.316 "name": "raid_bdev1", 00:26:54.316 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:54.316 "strip_size_kb": 0, 00:26:54.316 "state": "online", 00:26:54.316 "raid_level": "raid1", 00:26:54.316 "superblock": true, 00:26:54.316 "num_base_bdevs": 2, 00:26:54.316 "num_base_bdevs_discovered": 2, 00:26:54.316 "num_base_bdevs_operational": 2, 00:26:54.316 "process": { 00:26:54.316 "type": "rebuild", 00:26:54.316 "target": "spare", 00:26:54.316 "progress": { 00:26:54.316 "blocks": 32768, 00:26:54.316 "percent": 51 00:26:54.316 } 00:26:54.316 }, 00:26:54.316 "base_bdevs_list": [ 00:26:54.316 { 00:26:54.316 "name": "spare", 00:26:54.316 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:54.316 "is_configured": true, 00:26:54.316 "data_offset": 2048, 00:26:54.316 "data_size": 63488 00:26:54.316 }, 00:26:54.316 { 00:26:54.316 "name": "BaseBdev2", 00:26:54.316 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:54.316 "is_configured": true, 00:26:54.316 "data_offset": 2048, 00:26:54.316 "data_size": 63488 00:26:54.316 } 00:26:54.316 ] 00:26:54.316 }' 00:26:54.316 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.574 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.574 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.574 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.574 23:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- 
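
The wait in this section is bounded by bash's built-in SECONDS counter: "local timeout=801" (bdev_raid.sh@705) is a deadline relative to script start, consistent with something like SECONDS plus a fixed budget captured when the loop begins, and sh@706/@710 re-check the rebuild and sleep one second per pass until the process vanishes or the deadline hits. A condensed sketch of the pattern, with the jq verification from the trace folded into the helper call (the +800 budget is an assumption; the trace only shows the resulting 801):

    timeout=$((SECONDS + 800))      # assumption: 801 in the trace = SECONDS(1) + 800
    while (( SECONDS < timeout )); do
        verify_raid_bdev_process raid_bdev1 rebuild spare || break
        sleep 1
    done
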
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.510 23:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.769 23:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.769 "name": "raid_bdev1", 00:26:55.769 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:55.769 "strip_size_kb": 0, 00:26:55.769 "state": "online", 00:26:55.769 "raid_level": "raid1", 00:26:55.769 "superblock": true, 00:26:55.769 "num_base_bdevs": 2, 00:26:55.769 "num_base_bdevs_discovered": 2, 00:26:55.769 "num_base_bdevs_operational": 2, 00:26:55.769 "process": { 00:26:55.769 "type": "rebuild", 00:26:55.769 "target": "spare", 00:26:55.769 "progress": { 00:26:55.769 "blocks": 61440, 00:26:55.769 "percent": 96 00:26:55.769 } 00:26:55.769 }, 00:26:55.769 "base_bdevs_list": [ 00:26:55.769 { 00:26:55.769 "name": "spare", 00:26:55.769 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:55.769 "is_configured": true, 00:26:55.769 "data_offset": 2048, 00:26:55.769 "data_size": 63488 00:26:55.769 }, 00:26:55.769 { 00:26:55.769 "name": "BaseBdev2", 00:26:55.769 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:55.769 "is_configured": true, 00:26:55.769 "data_offset": 2048, 00:26:55.769 "data_size": 63488 00:26:55.769 } 00:26:55.769 ] 00:26:55.769 }' 00:26:55.769 23:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.769 23:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.769 23:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.769 [2024-07-13 23:12:45.146719] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:55.769 [2024-07-13 23:12:45.147180] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:55.769 [2024-07-13 23:12:45.147524] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.769 23:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.769 23:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:57.146 "name": "raid_bdev1", 00:26:57.146 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:57.146 
"strip_size_kb": 0, 00:26:57.146 "state": "online", 00:26:57.146 "raid_level": "raid1", 00:26:57.146 "superblock": true, 00:26:57.146 "num_base_bdevs": 2, 00:26:57.146 "num_base_bdevs_discovered": 2, 00:26:57.146 "num_base_bdevs_operational": 2, 00:26:57.146 "base_bdevs_list": [ 00:26:57.146 { 00:26:57.146 "name": "spare", 00:26:57.146 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:57.146 "is_configured": true, 00:26:57.146 "data_offset": 2048, 00:26:57.146 "data_size": 63488 00:26:57.146 }, 00:26:57.146 { 00:26:57.146 "name": "BaseBdev2", 00:26:57.146 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:57.146 "is_configured": true, 00:26:57.146 "data_offset": 2048, 00:26:57.146 "data_size": 63488 00:26:57.146 } 00:26:57.146 ] 00:26:57.146 }' 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.146 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.405 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:57.405 "name": "raid_bdev1", 00:26:57.405 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:57.405 "strip_size_kb": 0, 00:26:57.405 "state": "online", 00:26:57.405 "raid_level": "raid1", 00:26:57.405 "superblock": true, 00:26:57.405 "num_base_bdevs": 2, 00:26:57.405 "num_base_bdevs_discovered": 2, 00:26:57.405 "num_base_bdevs_operational": 2, 00:26:57.405 "base_bdevs_list": [ 00:26:57.405 { 00:26:57.405 "name": "spare", 00:26:57.405 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:57.405 "is_configured": true, 00:26:57.405 "data_offset": 2048, 00:26:57.405 "data_size": 63488 00:26:57.405 }, 00:26:57.405 { 00:26:57.405 "name": "BaseBdev2", 00:26:57.405 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:57.405 "is_configured": true, 00:26:57.405 "data_offset": 2048, 00:26:57.405 "data_size": 63488 00:26:57.405 } 00:26:57.405 ] 00:26:57.405 }' 00:26:57.405 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:57.405 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:57.405 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ none == \n\o\n\e ]] 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.678 23:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.959 23:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.959 "name": "raid_bdev1", 00:26:57.959 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:26:57.959 "strip_size_kb": 0, 00:26:57.959 "state": "online", 00:26:57.959 "raid_level": "raid1", 00:26:57.959 "superblock": true, 00:26:57.959 "num_base_bdevs": 2, 00:26:57.959 "num_base_bdevs_discovered": 2, 00:26:57.959 "num_base_bdevs_operational": 2, 00:26:57.959 "base_bdevs_list": [ 00:26:57.959 { 00:26:57.959 "name": "spare", 00:26:57.959 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:26:57.959 "is_configured": true, 00:26:57.959 "data_offset": 2048, 00:26:57.959 "data_size": 63488 00:26:57.959 }, 00:26:57.959 { 00:26:57.959 "name": "BaseBdev2", 00:26:57.959 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:26:57.959 "is_configured": true, 00:26:57.959 "data_offset": 2048, 00:26:57.959 "data_size": 63488 00:26:57.959 } 00:26:57.959 ] 00:26:57.959 }' 00:26:57.959 23:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.959 23:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.525 23:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:58.784 [2024-07-13 23:12:47.951342] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:58.784 [2024-07-13 23:12:47.951631] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:58.784 [2024-07-13 23:12:47.951889] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:58.784 [2024-07-13 23:12:47.952098] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:58.784 [2024-07-13 23:12:47.952216] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:26:58.784 23:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # 
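
Deleting the array (bdev_raid.sh@718 above) walks the state machine the DEBUG lines spell out: online -> offline, destruct, unregister of the per-bdev io device, and finally cleanup of the offline raid bdev. The @719 check, whose invocation resumes just below, then asserts nothing is left behind: bdev_raid_get_bdevs must come back as an empty JSON array. Condensed, with the socket path from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    [[ $($rpc bdev_raid_get_bdevs all | jq length) == 0 ]]   # the [[ 0 == 0 ]] seen below
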
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.784 23:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:59.043 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:59.301 /dev/nbd0 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.301 1+0 records in 00:26:59.301 1+0 records out 00:26:59.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423338 s, 9.7 MB/s 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 
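
waitfornbd (autotest_common.sh@866-887 above) only declares an nbd device usable after two independent probes: the device name appears in /proc/partitions, and a single 4 KiB O_DIRECT read through dd actually lands bytes in a scratch file (the stat -c %s / '[' 4096 '!=' 0 ']' pair completing just below). A condensed single-loop sketch of the helper; the real one retries the dd in a second bounded loop, and the pacing between polls is an assumption, since the trace succeeds on the first pass:

    waitfornbd() {
        local nbd=$1 i
        for ((i = 1; i <= 20; i++)); do                  # bounded retries, as traced
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1                                    # assumed pacing between polls
        done
        # prove the device services I/O: one direct 4 KiB read must produce bytes
        dd if=/dev/"$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }
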
4096 '!=' 0 ']' 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:59.301 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:59.558 /dev/nbd1 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:59.558 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.559 1+0 records in 00:26:59.559 1+0 records out 00:26:59.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567212 s, 7.2 MB/s 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:59.559 23:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
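
The data check itself is the single "cmp -i 1048576 /dev/nbd0 /dev/nbd1" above (bdev_raid.sh@737). The 1 MiB skip is not arbitrary: every status dump in this run reports "data_offset": 2048 blocks at a 512-byte blocklen, i.e. 2048 * 512 = 1048576 bytes of per-member superblock/metadata that may legitimately differ between members. Past that offset, the raid1 mirrors (BaseBdev1 on /dev/nbd0, the rebuilt spare on /dev/nbd1) must be byte-identical, which is exactly what cmp enforces:

    # -i N skips the first N bytes of BOTH inputs before comparing
    cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1 && echo "data regions match"
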
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:59.816 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:00.074 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:00.074 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:00.332 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:00.589 23:12:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:00.849 [2024-07-13 23:12:50.018136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:00.849 [2024-07-13 23:12:50.018535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.849 [2024-07-13 23:12:50.018756] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:00.849 [2024-07-13 23:12:50.018907] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.849 [2024-07-13 23:12:50.021475] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.849 [2024-07-13 23:12:50.021686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:00.849 [2024-07-13 23:12:50.021962] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:00.849 [2024-07-13 23:12:50.022225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:00.849 [2024-07-13 23:12:50.022571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:27:00.849 spare 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.849 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.849 [2024-07-13 23:12:50.122888] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:00.849 [2024-07-13 23:12:50.123068] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:00.849 [2024-07-13 23:12:50.123261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:27:00.849 [2024-07-13 23:12:50.123899] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:00.849 [2024-07-13 23:12:50.124055] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:27:00.849 [2024-07-13 23:12:50.124302] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.108 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.108 "name": "raid_bdev1", 00:27:01.108 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:01.108 "strip_size_kb": 0, 00:27:01.108 "state": "online", 00:27:01.108 "raid_level": "raid1", 00:27:01.108 "superblock": true, 00:27:01.108 "num_base_bdevs": 2, 00:27:01.108 "num_base_bdevs_discovered": 2, 00:27:01.108 "num_base_bdevs_operational": 2, 00:27:01.108 "base_bdevs_list": [ 00:27:01.108 { 00:27:01.108 "name": "spare", 00:27:01.108 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:27:01.108 "is_configured": true, 00:27:01.108 "data_offset": 2048, 00:27:01.108 "data_size": 63488 00:27:01.108 }, 00:27:01.108 { 00:27:01.108 "name": "BaseBdev2", 00:27:01.108 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:01.108 "is_configured": true, 00:27:01.108 "data_offset": 2048, 00:27:01.108 "data_size": 63488 00:27:01.108 } 00:27:01.108 ] 00:27:01.108 }' 00:27:01.108 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.108 23:12:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.675 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:01.675 23:12:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:01.675 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:01.675 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:01.675 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:01.675 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.675 23:12:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:01.934 "name": "raid_bdev1", 00:27:01.934 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:01.934 "strip_size_kb": 0, 00:27:01.934 "state": "online", 00:27:01.934 "raid_level": "raid1", 00:27:01.934 "superblock": true, 00:27:01.934 "num_base_bdevs": 2, 00:27:01.934 "num_base_bdevs_discovered": 2, 00:27:01.934 "num_base_bdevs_operational": 2, 00:27:01.934 "base_bdevs_list": [ 00:27:01.934 { 00:27:01.934 "name": "spare", 00:27:01.934 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:27:01.934 "is_configured": true, 00:27:01.934 "data_offset": 2048, 00:27:01.934 "data_size": 63488 00:27:01.934 }, 00:27:01.934 { 00:27:01.934 "name": "BaseBdev2", 00:27:01.934 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:01.934 "is_configured": true, 00:27:01.934 "data_offset": 2048, 00:27:01.934 "data_size": 63488 00:27:01.934 } 00:27:01.934 ] 00:27:01.934 }' 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.934 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:02.192 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:02.192 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:02.451 [2024-07-13 23:12:51.711102] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:02.451 23:12:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.451 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.710 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.710 "name": "raid_bdev1", 00:27:02.710 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:02.710 "strip_size_kb": 0, 00:27:02.710 "state": "online", 00:27:02.710 "raid_level": "raid1", 00:27:02.710 "superblock": true, 00:27:02.710 "num_base_bdevs": 2, 00:27:02.710 "num_base_bdevs_discovered": 1, 00:27:02.710 "num_base_bdevs_operational": 1, 00:27:02.710 "base_bdevs_list": [ 00:27:02.710 { 00:27:02.710 "name": null, 00:27:02.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.710 "is_configured": false, 00:27:02.710 "data_offset": 2048, 00:27:02.710 "data_size": 63488 00:27:02.710 }, 00:27:02.710 { 00:27:02.710 "name": "BaseBdev2", 00:27:02.710 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:02.710 "is_configured": true, 00:27:02.710 "data_offset": 2048, 00:27:02.710 "data_size": 63488 00:27:02.710 } 00:27:02.710 ] 00:27:02.710 }' 00:27:02.710 23:12:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.710 23:12:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.277 23:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:03.535 [2024-07-13 23:12:52.859409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:03.535 [2024-07-13 23:12:52.860074] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:03.535 [2024-07-13 23:12:52.860246] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
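
The hot-remove/re-add cycle above turns on the superblock sequence number. bdev_raid_remove_base_bdev leaves a null placeholder in base_bdevs_list (num_base_bdevs stays 2 while discovered/operational drop to 1), and when spare is handed back, examine finds a raid superblock whose seq_number (4) is one behind the live array's (5); rather than rejecting the stale member, the module logs "Re-adding bdev spare to raid bdev raid_bdev1." and starts a fresh rebuild onto it. The driving rpc sequence, using the same $rpc shorthand as earlier:

    $rpc bdev_raid_remove_base_bdev spare            # degraded: one null slot remains
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare    # stale superblock (seq 4 < 5)
                                                     #   => re-add + rebuild, not an error
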
00:27:03.535 [2024-07-13 23:12:52.860392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:03.535 [2024-07-13 23:12:52.867292] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:27:03.535 [2024-07-13 23:12:52.869587] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:03.535 23:12:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.908 23:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.908 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:04.908 "name": "raid_bdev1", 00:27:04.908 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:04.908 "strip_size_kb": 0, 00:27:04.908 "state": "online", 00:27:04.908 "raid_level": "raid1", 00:27:04.908 "superblock": true, 00:27:04.908 "num_base_bdevs": 2, 00:27:04.908 "num_base_bdevs_discovered": 2, 00:27:04.908 "num_base_bdevs_operational": 2, 00:27:04.908 "process": { 00:27:04.908 "type": "rebuild", 00:27:04.908 "target": "spare", 00:27:04.908 "progress": { 00:27:04.908 "blocks": 24576, 00:27:04.908 "percent": 38 00:27:04.908 } 00:27:04.908 }, 00:27:04.908 "base_bdevs_list": [ 00:27:04.908 { 00:27:04.908 "name": "spare", 00:27:04.908 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:27:04.908 "is_configured": true, 00:27:04.908 "data_offset": 2048, 00:27:04.908 "data_size": 63488 00:27:04.908 }, 00:27:04.908 { 00:27:04.908 "name": "BaseBdev2", 00:27:04.908 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:04.908 "is_configured": true, 00:27:04.908 "data_offset": 2048, 00:27:04.908 "data_size": 63488 00:27:04.908 } 00:27:04.908 ] 00:27:04.908 }' 00:27:04.908 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:04.908 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:04.908 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:04.908 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:04.908 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:05.166 [2024-07-13 23:12:54.504626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:05.425 [2024-07-13 23:12:54.581938] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:05.425 [2024-07-13 23:12:54.582547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.425 
[2024-07-13 23:12:54.582718] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:05.425 [2024-07-13 23:12:54.582926] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.425 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.685 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.685 "name": "raid_bdev1", 00:27:05.685 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:05.685 "strip_size_kb": 0, 00:27:05.685 "state": "online", 00:27:05.685 "raid_level": "raid1", 00:27:05.685 "superblock": true, 00:27:05.685 "num_base_bdevs": 2, 00:27:05.685 "num_base_bdevs_discovered": 1, 00:27:05.685 "num_base_bdevs_operational": 1, 00:27:05.685 "base_bdevs_list": [ 00:27:05.685 { 00:27:05.685 "name": null, 00:27:05.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.685 "is_configured": false, 00:27:05.685 "data_offset": 2048, 00:27:05.685 "data_size": 63488 00:27:05.685 }, 00:27:05.685 { 00:27:05.685 "name": "BaseBdev2", 00:27:05.685 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:05.685 "is_configured": true, 00:27:05.685 "data_offset": 2048, 00:27:05.685 "data_size": 63488 00:27:05.685 } 00:27:05.685 ] 00:27:05.685 }' 00:27:05.685 23:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.685 23:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.252 23:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:06.511 [2024-07-13 23:12:55.707340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:06.511 [2024-07-13 23:12:55.707851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.511 [2024-07-13 23:12:55.708105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:06.512 [2024-07-13 23:12:55.708295] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.512 [2024-07-13 23:12:55.709215] 
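
Deleting the passthru that is currently the rebuild target (bdev_raid.sh@759 above) is deliberate fault injection: the process thread finishes with "No such device", logged as a WARNING, and raid_bdev_process_finish_target_removed follows with the ERROR line; both are the expected outcome here, and the array simply drops back to the degraded one-member state that @760 verifies. The same create/delete race is replayed once more below (@761/@766) to show the error path is repeatable:

    $rpc bdev_passthru_create -b spare_delay -p spare   # examine re-adds spare, rebuild starts
    $rpc bdev_passthru_delete spare                     # yank the target mid-rebuild;
                                                        #   WARNING + "Failed to remove target
                                                        #   bdev: No such device" are expected
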
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.512 [2024-07-13 23:12:55.709442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:06.512 [2024-07-13 23:12:55.709736] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:06.512 [2024-07-13 23:12:55.709930] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:06.512 [2024-07-13 23:12:55.710075] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:06.512 [2024-07-13 23:12:55.710323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:06.512 [2024-07-13 23:12:55.717998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caefe0 00:27:06.512 spare 00:27:06.512 [2024-07-13 23:12:55.720664] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:06.512 23:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.444 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.702 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:07.702 "name": "raid_bdev1", 00:27:07.702 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:07.702 "strip_size_kb": 0, 00:27:07.702 "state": "online", 00:27:07.702 "raid_level": "raid1", 00:27:07.702 "superblock": true, 00:27:07.702 "num_base_bdevs": 2, 00:27:07.702 "num_base_bdevs_discovered": 2, 00:27:07.702 "num_base_bdevs_operational": 2, 00:27:07.702 "process": { 00:27:07.702 "type": "rebuild", 00:27:07.702 "target": "spare", 00:27:07.702 "progress": { 00:27:07.702 "blocks": 24576, 00:27:07.702 "percent": 38 00:27:07.702 } 00:27:07.702 }, 00:27:07.702 "base_bdevs_list": [ 00:27:07.702 { 00:27:07.702 "name": "spare", 00:27:07.702 "uuid": "81fc6484-8b4a-5091-b199-5edcbcbaf71d", 00:27:07.702 "is_configured": true, 00:27:07.702 "data_offset": 2048, 00:27:07.702 "data_size": 63488 00:27:07.702 }, 00:27:07.702 { 00:27:07.702 "name": "BaseBdev2", 00:27:07.702 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:07.702 "is_configured": true, 00:27:07.702 "data_offset": 2048, 00:27:07.702 "data_size": 63488 00:27:07.702 } 00:27:07.702 ] 00:27:07.702 }' 00:27:07.702 23:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:07.702 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.702 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:07.702 
23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.702 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:07.960 [2024-07-13 23:12:57.290554] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:07.960 [2024-07-13 23:12:57.333230] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:07.960 [2024-07-13 23:12:57.333702] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.960 [2024-07-13 23:12:57.333895] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:07.960 [2024-07-13 23:12:57.334032] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.960 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.219 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.219 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.219 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.219 "name": "raid_bdev1", 00:27:08.219 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:08.219 "strip_size_kb": 0, 00:27:08.219 "state": "online", 00:27:08.219 "raid_level": "raid1", 00:27:08.219 "superblock": true, 00:27:08.219 "num_base_bdevs": 2, 00:27:08.219 "num_base_bdevs_discovered": 1, 00:27:08.219 "num_base_bdevs_operational": 1, 00:27:08.219 "base_bdevs_list": [ 00:27:08.219 { 00:27:08.219 "name": null, 00:27:08.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.219 "is_configured": false, 00:27:08.219 "data_offset": 2048, 00:27:08.219 "data_size": 63488 00:27:08.219 }, 00:27:08.219 { 00:27:08.219 "name": "BaseBdev2", 00:27:08.219 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:08.219 "is_configured": true, 00:27:08.219 "data_offset": 2048, 00:27:08.219 "data_size": 63488 00:27:08.219 } 00:27:08.219 ] 00:27:08.219 }' 00:27:08.219 23:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.219 23:12:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.790 23:12:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:08.790 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:08.790 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:08.790 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:08.790 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:09.053 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.053 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.311 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.311 "name": "raid_bdev1", 00:27:09.311 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:09.312 "strip_size_kb": 0, 00:27:09.312 "state": "online", 00:27:09.312 "raid_level": "raid1", 00:27:09.312 "superblock": true, 00:27:09.312 "num_base_bdevs": 2, 00:27:09.312 "num_base_bdevs_discovered": 1, 00:27:09.312 "num_base_bdevs_operational": 1, 00:27:09.312 "base_bdevs_list": [ 00:27:09.312 { 00:27:09.312 "name": null, 00:27:09.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.312 "is_configured": false, 00:27:09.312 "data_offset": 2048, 00:27:09.312 "data_size": 63488 00:27:09.312 }, 00:27:09.312 { 00:27:09.312 "name": "BaseBdev2", 00:27:09.312 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:09.312 "is_configured": true, 00:27:09.312 "data_offset": 2048, 00:27:09.312 "data_size": 63488 00:27:09.312 } 00:27:09.312 ] 00:27:09.312 }' 00:27:09.312 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:09.312 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:09.312 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:09.312 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:09.312 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:09.569 23:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:09.827 [2024-07-13 23:12:59.042317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:09.827 [2024-07-13 23:12:59.042831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.827 [2024-07-13 23:12:59.043059] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:09.827 [2024-07-13 23:12:59.043255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.827 [2024-07-13 23:12:59.044015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.827 [2024-07-13 23:12:59.044222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:09.827 [2024-07-13 23:12:59.044535] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:09.827 [2024-07-13 23:12:59.044679] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:09.827 [2024-07-13 23:12:59.044857] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:09.827 BaseBdev1 00:27:09.827 23:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.763 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.022 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.022 "name": "raid_bdev1", 00:27:11.022 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:11.022 "strip_size_kb": 0, 00:27:11.022 "state": "online", 00:27:11.022 "raid_level": "raid1", 00:27:11.022 "superblock": true, 00:27:11.022 "num_base_bdevs": 2, 00:27:11.022 "num_base_bdevs_discovered": 1, 00:27:11.022 "num_base_bdevs_operational": 1, 00:27:11.022 "base_bdevs_list": [ 00:27:11.022 { 00:27:11.022 "name": null, 00:27:11.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.022 "is_configured": false, 00:27:11.022 "data_offset": 2048, 00:27:11.022 "data_size": 63488 00:27:11.022 }, 00:27:11.022 { 00:27:11.022 "name": "BaseBdev2", 00:27:11.022 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:11.022 "is_configured": true, 00:27:11.022 "data_offset": 2048, 00:27:11.022 "data_size": 63488 00:27:11.022 } 00:27:11.022 ] 00:27:11.022 }' 00:27:11.022 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.022 23:13:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.590 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:11.590 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:11.590 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:11.590 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:11.590 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:11.590 23:13:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.590 23:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.849 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.849 "name": "raid_bdev1", 00:27:11.850 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:11.850 "strip_size_kb": 0, 00:27:11.850 "state": "online", 00:27:11.850 "raid_level": "raid1", 00:27:11.850 "superblock": true, 00:27:11.850 "num_base_bdevs": 2, 00:27:11.850 "num_base_bdevs_discovered": 1, 00:27:11.850 "num_base_bdevs_operational": 1, 00:27:11.850 "base_bdevs_list": [ 00:27:11.850 { 00:27:11.850 "name": null, 00:27:11.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.850 "is_configured": false, 00:27:11.850 "data_offset": 2048, 00:27:11.850 "data_size": 63488 00:27:11.850 }, 00:27:11.850 { 00:27:11.850 "name": "BaseBdev2", 00:27:11.850 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:11.850 "is_configured": true, 00:27:11.850 "data_offset": 2048, 00:27:11.850 "data_size": 63488 00:27:11.850 } 00:27:11.850 ] 00:27:11.850 }' 00:27:11.850 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:12.108 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:12.367 [2024-07-13 23:13:01.590853] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:12.367 [2024-07-13 23:13:01.591510] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:12.367 [2024-07-13 23:13:01.591688] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:12.367 request: 00:27:12.367 { 00:27:12.367 "base_bdev": "BaseBdev1", 00:27:12.367 "raid_bdev": "raid_bdev1", 00:27:12.367 "method": "bdev_raid_add_base_bdev", 00:27:12.367 "req_id": 1 00:27:12.367 } 00:27:12.367 Got JSON-RPC error response 00:27:12.367 response: 00:27:12.367 { 00:27:12.367 "code": -22, 00:27:12.367 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:12.367 } 00:27:12.367 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:27:12.367 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.367 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.367 23:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:12.367 23:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.305 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.565 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.565 "name": "raid_bdev1", 00:27:13.565 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:13.565 "strip_size_kb": 0, 00:27:13.565 "state": "online", 00:27:13.565 "raid_level": "raid1", 00:27:13.565 "superblock": true, 00:27:13.565 "num_base_bdevs": 2, 00:27:13.565 "num_base_bdevs_discovered": 1, 00:27:13.565 "num_base_bdevs_operational": 1, 00:27:13.565 "base_bdevs_list": [ 00:27:13.565 { 00:27:13.565 "name": null, 00:27:13.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.565 "is_configured": false, 00:27:13.565 "data_offset": 2048, 00:27:13.565 "data_size": 63488 00:27:13.565 }, 00:27:13.565 { 00:27:13.565 "name": "BaseBdev2", 00:27:13.565 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 
00:27:13.565 "is_configured": true, 00:27:13.565 "data_offset": 2048, 00:27:13.565 "data_size": 63488 00:27:13.565 } 00:27:13.565 ] 00:27:13.565 }' 00:27:13.565 23:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.565 23:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.131 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:14.388 "name": "raid_bdev1", 00:27:14.388 "uuid": "cfa52e29-e192-457c-aa14-334af231f049", 00:27:14.388 "strip_size_kb": 0, 00:27:14.388 "state": "online", 00:27:14.388 "raid_level": "raid1", 00:27:14.388 "superblock": true, 00:27:14.388 "num_base_bdevs": 2, 00:27:14.388 "num_base_bdevs_discovered": 1, 00:27:14.388 "num_base_bdevs_operational": 1, 00:27:14.388 "base_bdevs_list": [ 00:27:14.388 { 00:27:14.388 "name": null, 00:27:14.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.388 "is_configured": false, 00:27:14.388 "data_offset": 2048, 00:27:14.388 "data_size": 63488 00:27:14.388 }, 00:27:14.388 { 00:27:14.388 "name": "BaseBdev2", 00:27:14.388 "uuid": "45f36de7-52c6-5e11-babd-740b261984c5", 00:27:14.388 "is_configured": true, 00:27:14.388 "data_offset": 2048, 00:27:14.388 "data_size": 63488 00:27:14.388 } 00:27:14.388 ] 00:27:14.388 }' 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 154327 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 154327 ']' 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 154327 00:27:14.388 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154327 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 154327' 00:27:14.646 killing process with pid 154327 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 154327 00:27:14.646 Received shutdown signal, test time was about 60.000000 seconds 00:27:14.646 00:27:14.646 Latency(us) 00:27:14.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.646 =================================================================================================================== 00:27:14.646 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:14.646 23:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 154327 00:27:14.646 [2024-07-13 23:13:03.811925] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:14.646 [2024-07-13 23:13:03.812320] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:14.646 [2024-07-13 23:13:03.812494] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:14.646 [2024-07-13 23:13:03.812622] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:27:14.646 [2024-07-13 23:13:03.848993] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:27:14.904 00:27:14.904 real 0m37.368s 00:27:14.904 user 0m55.055s 00:27:14.904 sys 0m6.328s 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.904 ************************************ 00:27:14.904 END TEST raid_rebuild_test_sb 00:27:14.904 ************************************ 00:27:14.904 23:13:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:14.904 23:13:04 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:27:14.904 23:13:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:14.904 23:13:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.904 23:13:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:14.904 ************************************ 00:27:14.904 START TEST raid_rebuild_test_io 00:27:14.904 ************************************ 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ 
)) 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=155281 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 155281 /var/tmp/spdk-raid.sock 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 155281 ']' 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:14.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:14.904 23:13:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:14.904 [2024-07-13 23:13:04.302199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:14.904 [2024-07-13 23:13:04.302663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155281 ] 00:27:14.904 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:14.904 Zero copy mechanism will not be used. 
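The bdevperf invocation logged above starts the target idle (-z) on a dedicated RPC socket so the test can assemble the RAID fixture before any I/O runs. A minimal bring-up sketch, assuming it is run from the SPDK repo root; the polling loop only approximates the harness's waitforlisten helper, and rpc_get_methods is used here simply as a cheap liveness probe:

    # Start bdevperf waiting (-z) on the raid test socket; flags mirror the
    # invocation in the trace above.
    SOCK=/var/tmp/spdk-raid.sock
    ./build/examples/bdevperf -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    bdevperf_pid=$!
    # Poll until the UNIX-domain socket accepts RPCs (approximates waitforlisten).
    for _ in $(seq 1 100); do
        scripts/rpc.py -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the malloc/passthru base bdevs and the raid1 bdev that the following entries create are all driven through this same socket.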
00:27:15.162 [2024-07-13 23:13:04.445578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.162 [2024-07-13 23:13:04.527359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.420 [2024-07-13 23:13:04.600632] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.987 23:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.987 23:13:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:27:15.987 23:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:15.987 23:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:16.246 BaseBdev1_malloc 00:27:16.246 23:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:16.504 [2024-07-13 23:13:05.709454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:16.504 [2024-07-13 23:13:05.709961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.504 [2024-07-13 23:13:05.710224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:27:16.504 [2024-07-13 23:13:05.710467] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.504 [2024-07-13 23:13:05.713681] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.504 [2024-07-13 23:13:05.713936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:16.504 BaseBdev1 00:27:16.504 23:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:16.504 23:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:16.763 BaseBdev2_malloc 00:27:16.763 23:13:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:16.763 [2024-07-13 23:13:06.157724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:16.763 [2024-07-13 23:13:06.158183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.763 [2024-07-13 23:13:06.158362] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:16.763 [2024-07-13 23:13:06.158617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.763 [2024-07-13 23:13:06.161478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.763 [2024-07-13 23:13:06.161676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:16.763 BaseBdev2 00:27:17.021 23:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:17.279 spare_malloc 00:27:17.279 23:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:17.538 spare_delay 00:27:17.538 23:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:17.797 [2024-07-13 23:13:06.968143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:17.797 [2024-07-13 23:13:06.968610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.797 [2024-07-13 23:13:06.968846] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:17.797 [2024-07-13 23:13:06.969052] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.797 [2024-07-13 23:13:06.972235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.797 [2024-07-13 23:13:06.972468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:17.797 spare 00:27:17.797 23:13:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:17.797 [2024-07-13 23:13:07.189045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:17.797 [2024-07-13 23:13:07.192019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.797 [2024-07-13 23:13:07.192367] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:27:17.797 [2024-07-13 23:13:07.192511] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:17.797 [2024-07-13 23:13:07.192998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:27:17.797 [2024-07-13 23:13:07.193802] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:27:17.797 [2024-07-13 23:13:07.193944] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:27:17.797 [2024-07-13 23:13:07.194441] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.055 "name": "raid_bdev1", 00:27:18.055 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:18.055 "strip_size_kb": 0, 00:27:18.055 "state": "online", 00:27:18.055 "raid_level": "raid1", 00:27:18.055 "superblock": false, 00:27:18.055 "num_base_bdevs": 2, 00:27:18.055 "num_base_bdevs_discovered": 2, 00:27:18.055 "num_base_bdevs_operational": 2, 00:27:18.055 "base_bdevs_list": [ 00:27:18.055 { 00:27:18.055 "name": "BaseBdev1", 00:27:18.055 "uuid": "ba0ba185-86ea-5d82-a15c-3a2ca5a6fa9d", 00:27:18.055 "is_configured": true, 00:27:18.055 "data_offset": 0, 00:27:18.055 "data_size": 65536 00:27:18.055 }, 00:27:18.055 { 00:27:18.055 "name": "BaseBdev2", 00:27:18.055 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:18.055 "is_configured": true, 00:27:18.055 "data_offset": 0, 00:27:18.055 "data_size": 65536 00:27:18.055 } 00:27:18.055 ] 00:27:18.055 }' 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.055 23:13:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:18.990 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:18.990 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:18.990 [2024-07-13 23:13:08.391119] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:19.249 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:19.507 [2024-07-13 23:13:08.738614] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:27:19.507 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:19.507 Zero copy mechanism will not be used. 00:27:19.507 Running I/O for 60 seconds... 
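Each verify_raid_bdev_state call in this trace boils down to the same steps: dump every RAID bdev over RPC, select the one under test with jq, and compare individual JSON fields against the expected values. A condensed, standalone sketch of that check (the socket path and field names are taken from the trace; the real helper in bdev_raid.sh also tracks raid_level, strip_size and the base-bdev list, as its local declarations above show):

    # Fetch the JSON dumped above and compare two of its fields.
    check_raid_state() {
        local name=$1 state=$2 discovered=$3 info
        info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<<"$info") == "$state" ]] &&
            [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == "$discovered" ]]
    }
    check_raid_state raid_bdev1 online 2   # mirrors the verification logged above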
00:27:19.507 [2024-07-13 23:13:08.840688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:19.507 [2024-07-13 23:13:08.849129] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:19.507 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.508 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.508 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.508 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.508 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.508 23:13:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.781 23:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.781 "name": "raid_bdev1", 00:27:19.781 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:19.781 "strip_size_kb": 0, 00:27:19.781 "state": "online", 00:27:19.781 "raid_level": "raid1", 00:27:19.781 "superblock": false, 00:27:19.781 "num_base_bdevs": 2, 00:27:19.781 "num_base_bdevs_discovered": 1, 00:27:19.781 "num_base_bdevs_operational": 1, 00:27:19.781 "base_bdevs_list": [ 00:27:19.781 { 00:27:19.781 "name": null, 00:27:19.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.781 "is_configured": false, 00:27:19.781 "data_offset": 0, 00:27:19.781 "data_size": 65536 00:27:19.781 }, 00:27:19.781 { 00:27:19.781 "name": "BaseBdev2", 00:27:19.781 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:19.781 "is_configured": true, 00:27:19.781 "data_offset": 0, 00:27:19.781 "data_size": 65536 00:27:19.781 } 00:27:19.781 ] 00:27:19.781 }' 00:27:19.781 23:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.781 23:13:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 23:13:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:20.640 [2024-07-13 23:13:10.018099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:20.898 23:13:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:20.898 [2024-07-13 23:13:10.087526] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:20.898 [2024-07-13 23:13:10.090640] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:20.898 [2024-07-13 23:13:10.194395] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:20.898 [2024-07-13 23:13:10.195264] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:21.156 [2024-07-13 23:13:10.424722] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:21.156 [2024-07-13 23:13:10.425526] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:21.414 [2024-07-13 23:13:10.798635] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:21.414 [2024-07-13 23:13:10.799480] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:21.672 [2024-07-13 23:13:11.009512] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:21.672 [2024-07-13 23:13:11.010148] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.931 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.931 "name": "raid_bdev1", 00:27:21.931 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:21.931 "strip_size_kb": 0, 00:27:21.931 "state": "online", 00:27:21.931 "raid_level": "raid1", 00:27:21.931 "superblock": false, 00:27:21.931 "num_base_bdevs": 2, 00:27:21.931 "num_base_bdevs_discovered": 2, 00:27:21.931 "num_base_bdevs_operational": 2, 00:27:21.931 "process": { 00:27:21.931 "type": "rebuild", 00:27:21.931 "target": "spare", 00:27:21.931 "progress": { 00:27:21.931 "blocks": 14336, 00:27:21.931 "percent": 21 00:27:21.931 } 00:27:21.931 }, 00:27:21.931 "base_bdevs_list": [ 00:27:21.931 { 00:27:21.931 "name": "spare", 00:27:21.931 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:21.931 "is_configured": true, 00:27:21.931 "data_offset": 0, 00:27:21.931 "data_size": 65536 00:27:21.931 }, 00:27:21.931 { 00:27:21.931 "name": "BaseBdev2", 00:27:21.931 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:21.931 "is_configured": true, 00:27:21.931 "data_offset": 0, 00:27:21.931 "data_size": 65536 00:27:21.931 } 00:27:21.931 ] 00:27:21.931 }' 00:27:22.190 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.190 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.190 23:13:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.190 [2024-07-13 23:13:11.395251] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:22.190 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.190 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:22.447 [2024-07-13 23:13:11.686275] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.447 [2024-07-13 23:13:11.811360] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:22.447 [2024-07-13 23:13:11.828835] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.447 [2024-07-13 23:13:11.829143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.447 [2024-07-13 23:13:11.829209] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:22.447 [2024-07-13 23:13:11.847678] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.704 23:13:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.961 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.961 "name": "raid_bdev1", 00:27:22.961 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:22.961 "strip_size_kb": 0, 00:27:22.961 "state": "online", 00:27:22.961 "raid_level": "raid1", 00:27:22.961 "superblock": false, 00:27:22.961 "num_base_bdevs": 2, 00:27:22.961 "num_base_bdevs_discovered": 1, 00:27:22.961 "num_base_bdevs_operational": 1, 00:27:22.961 "base_bdevs_list": [ 00:27:22.961 { 00:27:22.961 "name": null, 00:27:22.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.961 "is_configured": false, 00:27:22.961 "data_offset": 0, 00:27:22.961 "data_size": 65536 00:27:22.961 }, 00:27:22.961 { 00:27:22.961 "name": "BaseBdev2", 00:27:22.961 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:22.961 "is_configured": true, 
00:27:22.961 "data_offset": 0, 00:27:22.961 "data_size": 65536 00:27:22.961 } 00:27:22.961 ] 00:27:22.961 }' 00:27:22.961 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.961 23:13:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.547 23:13:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.805 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.805 "name": "raid_bdev1", 00:27:23.805 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:23.805 "strip_size_kb": 0, 00:27:23.805 "state": "online", 00:27:23.805 "raid_level": "raid1", 00:27:23.805 "superblock": false, 00:27:23.805 "num_base_bdevs": 2, 00:27:23.805 "num_base_bdevs_discovered": 1, 00:27:23.805 "num_base_bdevs_operational": 1, 00:27:23.805 "base_bdevs_list": [ 00:27:23.805 { 00:27:23.805 "name": null, 00:27:23.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.805 "is_configured": false, 00:27:23.805 "data_offset": 0, 00:27:23.805 "data_size": 65536 00:27:23.805 }, 00:27:23.805 { 00:27:23.805 "name": "BaseBdev2", 00:27:23.805 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:23.805 "is_configured": true, 00:27:23.805 "data_offset": 0, 00:27:23.805 "data_size": 65536 00:27:23.805 } 00:27:23.805 ] 00:27:23.805 }' 00:27:23.805 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.805 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:23.805 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.805 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:23.805 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.062 [2024-07-13 23:13:13.320542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.062 23:13:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:24.062 [2024-07-13 23:13:13.361837] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:27:24.062 [2024-07-13 23:13:13.364557] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.320 [2024-07-13 23:13:13.488027] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:24.320 [2024-07-13 23:13:13.488893] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:27:24.320 [2024-07-13 23:13:13.704921] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:24.320 [2024-07-13 23:13:13.705642] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:24.885 [2024-07-13 23:13:14.071661] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:25.143 [2024-07-13 23:13:14.296955] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.143 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.143 [2024-07-13 23:13:14.526993] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.402 "name": "raid_bdev1", 00:27:25.402 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:25.402 "strip_size_kb": 0, 00:27:25.402 "state": "online", 00:27:25.402 "raid_level": "raid1", 00:27:25.402 "superblock": false, 00:27:25.402 "num_base_bdevs": 2, 00:27:25.402 "num_base_bdevs_discovered": 2, 00:27:25.402 "num_base_bdevs_operational": 2, 00:27:25.402 "process": { 00:27:25.402 "type": "rebuild", 00:27:25.402 "target": "spare", 00:27:25.402 "progress": { 00:27:25.402 "blocks": 14336, 00:27:25.402 "percent": 21 00:27:25.402 } 00:27:25.402 }, 00:27:25.402 "base_bdevs_list": [ 00:27:25.402 { 00:27:25.402 "name": "spare", 00:27:25.402 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:25.402 "is_configured": true, 00:27:25.402 "data_offset": 0, 00:27:25.402 "data_size": 65536 00:27:25.402 }, 00:27:25.402 { 00:27:25.402 "name": "BaseBdev2", 00:27:25.402 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:25.402 "is_configured": true, 00:27:25.402 "data_offset": 0, 00:27:25.402 "data_size": 65536 00:27:25.402 } 00:27:25.402 ] 00:27:25.402 }' 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:25.402 
23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=832 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.402 23:13:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.402 [2024-07-13 23:13:14.749728] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:25.660 [2024-07-13 23:13:14.969620] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:25.661 [2024-07-13 23:13:14.970418] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:25.661 23:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.661 "name": "raid_bdev1", 00:27:25.661 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:25.661 "strip_size_kb": 0, 00:27:25.661 "state": "online", 00:27:25.661 "raid_level": "raid1", 00:27:25.661 "superblock": false, 00:27:25.661 "num_base_bdevs": 2, 00:27:25.661 "num_base_bdevs_discovered": 2, 00:27:25.661 "num_base_bdevs_operational": 2, 00:27:25.661 "process": { 00:27:25.661 "type": "rebuild", 00:27:25.661 "target": "spare", 00:27:25.661 "progress": { 00:27:25.661 "blocks": 20480, 00:27:25.661 "percent": 31 00:27:25.661 } 00:27:25.661 }, 00:27:25.661 "base_bdevs_list": [ 00:27:25.661 { 00:27:25.661 "name": "spare", 00:27:25.661 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:25.661 "is_configured": true, 00:27:25.661 "data_offset": 0, 00:27:25.661 "data_size": 65536 00:27:25.661 }, 00:27:25.661 { 00:27:25.661 "name": "BaseBdev2", 00:27:25.661 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:25.661 "is_configured": true, 00:27:25.661 "data_offset": 0, 00:27:25.661 "data_size": 65536 00:27:25.661 } 00:27:25.661 ] 00:27:25.661 }' 00:27:25.661 23:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.661 23:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.661 23:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.918 23:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.918 23:13:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:26.177 [2024-07-13 23:13:15.349653] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:26.177 [2024-07-13 23:13:15.486143] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:26.435 [2024-07-13 23:13:15.827802] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.003 [2024-07-13 23:13:16.184570] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:27.003 "name": "raid_bdev1", 00:27:27.003 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:27.003 "strip_size_kb": 0, 00:27:27.003 "state": "online", 00:27:27.003 "raid_level": "raid1", 00:27:27.003 "superblock": false, 00:27:27.003 "num_base_bdevs": 2, 00:27:27.003 "num_base_bdevs_discovered": 2, 00:27:27.003 "num_base_bdevs_operational": 2, 00:27:27.003 "process": { 00:27:27.003 "type": "rebuild", 00:27:27.003 "target": "spare", 00:27:27.003 "progress": { 00:27:27.003 "blocks": 38912, 00:27:27.003 "percent": 59 00:27:27.003 } 00:27:27.003 }, 00:27:27.003 "base_bdevs_list": [ 00:27:27.003 { 00:27:27.003 "name": "spare", 00:27:27.003 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:27.003 "is_configured": true, 00:27:27.003 "data_offset": 0, 00:27:27.003 "data_size": 65536 00:27:27.003 }, 00:27:27.003 { 00:27:27.003 "name": "BaseBdev2", 00:27:27.003 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:27.003 "is_configured": true, 00:27:27.003 "data_offset": 0, 00:27:27.003 "data_size": 65536 00:27:27.003 } 00:27:27.003 ] 00:27:27.003 }' 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:27.003 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:27.261 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:27.261 23:13:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:27.828 [2024-07-13 23:13:17.079698] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:28.086 [2024-07-13 23:13:17.304599] bdev_raid.c: 839:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.086 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.344 [2024-07-13 23:13:17.522830] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:28.344 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.344 "name": "raid_bdev1", 00:27:28.344 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:28.344 "strip_size_kb": 0, 00:27:28.344 "state": "online", 00:27:28.344 "raid_level": "raid1", 00:27:28.344 "superblock": false, 00:27:28.344 "num_base_bdevs": 2, 00:27:28.344 "num_base_bdevs_discovered": 2, 00:27:28.344 "num_base_bdevs_operational": 2, 00:27:28.344 "process": { 00:27:28.344 "type": "rebuild", 00:27:28.344 "target": "spare", 00:27:28.344 "progress": { 00:27:28.344 "blocks": 61440, 00:27:28.344 "percent": 93 00:27:28.344 } 00:27:28.344 }, 00:27:28.344 "base_bdevs_list": [ 00:27:28.344 { 00:27:28.344 "name": "spare", 00:27:28.344 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:28.344 "is_configured": true, 00:27:28.344 "data_offset": 0, 00:27:28.344 "data_size": 65536 00:27:28.344 }, 00:27:28.344 { 00:27:28.344 "name": "BaseBdev2", 00:27:28.344 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:28.344 "is_configured": true, 00:27:28.344 "data_offset": 0, 00:27:28.344 "data_size": 65536 00:27:28.344 } 00:27:28.344 ] 00:27:28.344 }' 00:27:28.344 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.602 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.602 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.602 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.602 23:13:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:28.602 [2024-07-13 23:13:17.843020] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:28.602 [2024-07-13 23:13:17.850739] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:28.602 [2024-07-13 23:13:17.852834] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.535 23:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.794 "name": "raid_bdev1", 00:27:29.794 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:29.794 "strip_size_kb": 0, 00:27:29.794 "state": "online", 00:27:29.794 "raid_level": "raid1", 00:27:29.794 "superblock": false, 00:27:29.794 "num_base_bdevs": 2, 00:27:29.794 "num_base_bdevs_discovered": 2, 00:27:29.794 "num_base_bdevs_operational": 2, 00:27:29.794 "base_bdevs_list": [ 00:27:29.794 { 00:27:29.794 "name": "spare", 00:27:29.794 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:29.794 "is_configured": true, 00:27:29.794 "data_offset": 0, 00:27:29.794 "data_size": 65536 00:27:29.794 }, 00:27:29.794 { 00:27:29.794 "name": "BaseBdev2", 00:27:29.794 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:29.794 "is_configured": true, 00:27:29.794 "data_offset": 0, 00:27:29.794 "data_size": 65536 00:27:29.794 } 00:27:29.794 ] 00:27:29.794 }' 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.794 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.052 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.052 "name": "raid_bdev1", 00:27:30.052 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:30.052 "strip_size_kb": 0, 00:27:30.052 "state": "online", 00:27:30.052 "raid_level": "raid1", 00:27:30.052 "superblock": false, 00:27:30.052 "num_base_bdevs": 2, 00:27:30.052 "num_base_bdevs_discovered": 2, 00:27:30.052 
"num_base_bdevs_operational": 2, 00:27:30.052 "base_bdevs_list": [ 00:27:30.052 { 00:27:30.052 "name": "spare", 00:27:30.052 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:30.052 "is_configured": true, 00:27:30.052 "data_offset": 0, 00:27:30.052 "data_size": 65536 00:27:30.052 }, 00:27:30.052 { 00:27:30.052 "name": "BaseBdev2", 00:27:30.052 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:30.052 "is_configured": true, 00:27:30.052 "data_offset": 0, 00:27:30.052 "data_size": 65536 00:27:30.052 } 00:27:30.052 ] 00:27:30.052 }' 00:27:30.052 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:30.052 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:30.052 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.310 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.568 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.568 "name": "raid_bdev1", 00:27:30.568 "uuid": "61b3fd82-4f33-4f2c-ba51-bc900f5a7514", 00:27:30.568 "strip_size_kb": 0, 00:27:30.568 "state": "online", 00:27:30.568 "raid_level": "raid1", 00:27:30.568 "superblock": false, 00:27:30.568 "num_base_bdevs": 2, 00:27:30.568 "num_base_bdevs_discovered": 2, 00:27:30.568 "num_base_bdevs_operational": 2, 00:27:30.568 "base_bdevs_list": [ 00:27:30.568 { 00:27:30.568 "name": "spare", 00:27:30.568 "uuid": "0a36a4ae-1c2d-587d-af7f-57f09014a056", 00:27:30.568 "is_configured": true, 00:27:30.568 "data_offset": 0, 00:27:30.568 "data_size": 65536 00:27:30.568 }, 00:27:30.568 { 00:27:30.568 "name": "BaseBdev2", 00:27:30.568 "uuid": "5799d1e1-0a68-5907-8c99-c2c7c39da17c", 00:27:30.568 "is_configured": true, 00:27:30.568 "data_offset": 0, 00:27:30.568 "data_size": 65536 00:27:30.568 } 00:27:30.568 ] 00:27:30.568 }' 00:27:30.568 23:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.568 23:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:27:31.131 23:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:31.389 [2024-07-13 23:13:20.696353] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:31.389 [2024-07-13 23:13:20.696801] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:31.646 00:27:31.646 Latency(us) 00:27:31.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.646 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:31.646 raid_bdev1 : 12.05 103.12 309.35 0.00 0.00 13649.12 284.86 121062.87 00:27:31.646 =================================================================================================================== 00:27:31.646 Total : 103.12 309.35 0.00 0.00 13649.12 284.86 121062.87 00:27:31.646 [2024-07-13 23:13:20.801640] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.646 [2024-07-13 23:13:20.801925] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:31.646 0 00:27:31.646 [2024-07-13 23:13:20.802133] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:31.646 [2024-07-13 23:13:20.802154] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:27:31.646 23:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.646 23:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:31.904 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:32.162 /dev/nbd0 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:32.162 
23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:32.162 1+0 records in 00:27:32.162 1+0 records out 00:27:32.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550776 s, 7.4 MB/s 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:32.162 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:32.420 /dev/nbd1 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:32.420 
23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:32.420 1+0 records in 00:27:32.420 1+0 records out 00:27:32.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120719 s, 3.4 MB/s 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.420 23:13:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:32.677 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.678 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:32.936 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:32.936 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:32.936 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:32.936 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.936 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.936 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 155281 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 155281 ']' 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 155281 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155281 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155281' 00:27:33.195 killing process with pid 155281 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 155281 00:27:33.195 Received shutdown signal, test time was about 13.627429 seconds 00:27:33.195 00:27:33.195 Latency(us) 00:27:33.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.195 =================================================================================================================== 00:27:33.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.195 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 155281 00:27:33.195 [2024-07-13 23:13:22.369613] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:33.195 
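The nbd steps traced above (bdev_raid.sh@724-733) are the data-integrity check for the mirror: each half of the RAID1 is exported through the kernel NBD driver, probed with a single direct-I/O read once it appears in /proc/partitions, and then compared byte for byte with cmp. A minimal standalone sketch of that flow, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock and rpc.py on PATH (the trace uses the full repo path instead):

    #!/usr/bin/env bash
    # Sketch: export two bdevs over NBD and compare them, as in the trace above.
    rpc="rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc nbd_start_disk spare /dev/nbd0
    $rpc nbd_start_disk BaseBdev2 /dev/nbd1

    for nbd in nbd0 nbd1; do
        # Wait until the kernel has registered the device, then probe one block.
        until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
        dd if=/dev/$nbd of=/dev/null bs=4096 count=1 iflag=direct
    done

    cmp -i 0 /dev/nbd0 /dev/nbd1   # identical halves => consistent RAID1 mirror

    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_stop_disk /dev/nbd0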
[2024-07-13 23:13:22.404668] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:27:33.453 00:27:33.453 real 0m18.521s 00:27:33.453 user 0m29.012s 00:27:33.453 sys 0m2.187s 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.453 ************************************ 00:27:33.453 END TEST raid_rebuild_test_io 00:27:33.453 ************************************ 00:27:33.453 23:13:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:33.453 23:13:22 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:27:33.453 23:13:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:33.453 23:13:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.453 23:13:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:33.453 ************************************ 00:27:33.453 START TEST raid_rebuild_test_sb_io 00:27:33.453 ************************************ 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=155760 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 155760 /var/tmp/spdk-raid.sock 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 155760 ']' 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:33.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.453 23:13:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.711 [2024-07-13 23:13:22.890376] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:33.711 [2024-07-13 23:13:22.890869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155760 ] 00:27:33.711 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:33.711 Zero copy mechanism will not be used. 
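Before any bdev_* RPCs are issued, waitforlisten (common/autotest_common.sh@833-838 above) blocks until the freshly forked bdevperf answers on its UNIX socket. A rough sketch of that launch-and-poll pattern, reusing the bdevperf arguments printed above; the rpc_get_methods probe and the ~10 s retry bound are assumptions about waitforlisten's internals, not taken from this trace:

    # Sketch: start bdevperf and wait for its RPC socket (cf. raid_pid=155760 above).
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/spdk-raid.sock

    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Poll a trivial RPC until the target answers; give up after ~10 seconds.
    for ((i = 0; i < 100; i++)); do
        rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done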
00:27:33.711 [2024-07-13 23:13:23.035350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.970 [2024-07-13 23:13:23.136059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.970 [2024-07-13 23:13:23.217501] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:34.537 23:13:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.537 23:13:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:27:34.537 23:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:34.537 23:13:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:34.795 BaseBdev1_malloc 00:27:34.795 23:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:35.054 [2024-07-13 23:13:24.334414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:35.054 [2024-07-13 23:13:24.334705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.054 [2024-07-13 23:13:24.334804] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:27:35.054 [2024-07-13 23:13:24.335133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.054 [2024-07-13 23:13:24.337901] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.054 [2024-07-13 23:13:24.338120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:35.054 BaseBdev1 00:27:35.054 23:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:35.054 23:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:35.313 BaseBdev2_malloc 00:27:35.313 23:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:35.571 [2024-07-13 23:13:24.824701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:35.571 [2024-07-13 23:13:24.824959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.571 [2024-07-13 23:13:24.825160] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:35.571 [2024-07-13 23:13:24.825337] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.571 [2024-07-13 23:13:24.827642] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.571 [2024-07-13 23:13:24.827843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:35.571 BaseBdev2 00:27:35.571 23:13:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:35.829 spare_malloc 00:27:35.829 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:36.087 spare_delay 00:27:36.087 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:36.346 [2024-07-13 23:13:25.546081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:36.346 [2024-07-13 23:13:25.546352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:36.346 [2024-07-13 23:13:25.546565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:36.346 [2024-07-13 23:13:25.546747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:36.346 [2024-07-13 23:13:25.549461] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:36.346 [2024-07-13 23:13:25.549675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:36.346 spare 00:27:36.346 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:36.605 [2024-07-13 23:13:25.758265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:36.605 [2024-07-13 23:13:25.760884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:36.605 [2024-07-13 23:13:25.761297] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:27:36.605 [2024-07-13 23:13:25.761501] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:36.605 [2024-07-13 23:13:25.761808] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:27:36.605 [2024-07-13 23:13:25.762576] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:27:36.605 [2024-07-13 23:13:25.762787] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:27:36.605 [2024-07-13 23:13:25.763094] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.605 "name": "raid_bdev1", 00:27:36.605 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:36.605 "strip_size_kb": 0, 00:27:36.605 "state": "online", 00:27:36.605 "raid_level": "raid1", 00:27:36.605 "superblock": true, 00:27:36.605 "num_base_bdevs": 2, 00:27:36.605 "num_base_bdevs_discovered": 2, 00:27:36.605 "num_base_bdevs_operational": 2, 00:27:36.605 "base_bdevs_list": [ 00:27:36.605 { 00:27:36.605 "name": "BaseBdev1", 00:27:36.605 "uuid": "abae88ad-d2d0-5954-a2f4-ac20aa7eb064", 00:27:36.605 "is_configured": true, 00:27:36.605 "data_offset": 2048, 00:27:36.605 "data_size": 63488 00:27:36.605 }, 00:27:36.605 { 00:27:36.605 "name": "BaseBdev2", 00:27:36.605 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:36.605 "is_configured": true, 00:27:36.605 "data_offset": 2048, 00:27:36.605 "data_size": 63488 00:27:36.605 } 00:27:36.605 ] 00:27:36.605 }' 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.605 23:13:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.541 23:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:37.541 23:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:37.541 [2024-07-13 23:13:26.847683] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:37.541 23:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:37.541 23:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.541 23:13:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:37.799 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:37.799 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:37.799 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:37.799 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:38.057 [2024-07-13 23:13:27.242768] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:27:38.057 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:38.057 Zero copy mechanism will not be used. 00:27:38.057 Running I/O for 60 seconds... 
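Condensing the RPCs traced in sh@600-611 above: the RAID1 under test sits on two malloc bdevs wrapped in passthru claims, while the rebuild target ("spare") sits behind a delay bdev, presumably so the rebuild makes slow, observable progress. A sketch of that stack with the sizes as logged (32 MiB bdevs, 512 B blocks); the -s superblock reserves the first 2048 blocks, which matches the data_offset of 2048 and the drop from 65536 to 63488 usable blocks reported just above:

    rpc="rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc   # 32 MiB, 512 B blocks
        $rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev$i
    done
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

    # Rebuild target: malloc behind a delay bdev, claimed via passthru as "spare".
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare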
00:27:38.057 [2024-07-13 23:13:27.382770] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:38.057 [2024-07-13 23:13:27.383290] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.057 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.315 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.315 "name": "raid_bdev1", 00:27:38.315 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:38.315 "strip_size_kb": 0, 00:27:38.315 "state": "online", 00:27:38.315 "raid_level": "raid1", 00:27:38.315 "superblock": true, 00:27:38.315 "num_base_bdevs": 2, 00:27:38.315 "num_base_bdevs_discovered": 1, 00:27:38.315 "num_base_bdevs_operational": 1, 00:27:38.315 "base_bdevs_list": [ 00:27:38.315 { 00:27:38.315 "name": null, 00:27:38.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.315 "is_configured": false, 00:27:38.315 "data_offset": 2048, 00:27:38.315 "data_size": 63488 00:27:38.315 }, 00:27:38.315 { 00:27:38.315 "name": "BaseBdev2", 00:27:38.315 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:38.315 "is_configured": true, 00:27:38.315 "data_offset": 2048, 00:27:38.315 "data_size": 63488 00:27:38.315 } 00:27:38.315 ] 00:27:38.315 }' 00:27:38.315 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.315 23:13:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:38.882 23:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:39.141 [2024-07-13 23:13:28.471748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:39.141 23:13:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:39.141 [2024-07-13 23:13:28.530762] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:39.141 [2024-07-13 23:13:28.533597] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:39.400 
[2024-07-13 23:13:28.658276] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:39.400 [2024-07-13 23:13:28.659058] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:39.659 [2024-07-13 23:13:28.870918] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:39.659 [2024-07-13 23:13:28.871490] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:39.918 [2024-07-13 23:13:29.202512] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:39.918 [2024-07-13 23:13:29.209677] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:40.176 [2024-07-13 23:13:29.437561] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:40.176 [2024-07-13 23:13:29.437909] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.176 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.435 [2024-07-13 23:13:29.777159] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:40.435 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.435 "name": "raid_bdev1", 00:27:40.435 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:40.435 "strip_size_kb": 0, 00:27:40.435 "state": "online", 00:27:40.435 "raid_level": "raid1", 00:27:40.435 "superblock": true, 00:27:40.435 "num_base_bdevs": 2, 00:27:40.435 "num_base_bdevs_discovered": 2, 00:27:40.435 "num_base_bdevs_operational": 2, 00:27:40.435 "process": { 00:27:40.435 "type": "rebuild", 00:27:40.435 "target": "spare", 00:27:40.435 "progress": { 00:27:40.435 "blocks": 14336, 00:27:40.435 "percent": 22 00:27:40.436 } 00:27:40.436 }, 00:27:40.436 "base_bdevs_list": [ 00:27:40.436 { 00:27:40.436 "name": "spare", 00:27:40.436 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:40.436 "is_configured": true, 00:27:40.436 "data_offset": 2048, 00:27:40.436 "data_size": 63488 00:27:40.436 }, 00:27:40.436 { 00:27:40.436 "name": "BaseBdev2", 00:27:40.436 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:40.436 "is_configured": true, 00:27:40.436 "data_offset": 2048, 00:27:40.436 "data_size": 63488 00:27:40.436 } 00:27:40.436 ] 00:27:40.436 }' 00:27:40.436 23:13:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:40.436 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:40.436 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:40.694 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:40.694 23:13:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:40.695 [2024-07-13 23:13:30.099635] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:40.695 [2024-07-13 23:13:30.100390] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:40.953 [2024-07-13 23:13:30.137615] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.953 [2024-07-13 23:13:30.318140] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:40.953 [2024-07-13 23:13:30.320606] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.953 [2024-07-13 23:13:30.320829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.953 [2024-07-13 23:13:30.320883] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:40.953 [2024-07-13 23:13:30.359233] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.212 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.471 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.471 "name": "raid_bdev1", 00:27:41.471 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:41.471 "strip_size_kb": 0, 00:27:41.471 "state": "online", 00:27:41.471 "raid_level": "raid1", 00:27:41.471 "superblock": true, 00:27:41.471 "num_base_bdevs": 2, 00:27:41.471 
"num_base_bdevs_discovered": 1, 00:27:41.471 "num_base_bdevs_operational": 1, 00:27:41.471 "base_bdevs_list": [ 00:27:41.471 { 00:27:41.471 "name": null, 00:27:41.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.472 "is_configured": false, 00:27:41.472 "data_offset": 2048, 00:27:41.472 "data_size": 63488 00:27:41.472 }, 00:27:41.472 { 00:27:41.472 "name": "BaseBdev2", 00:27:41.472 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:41.472 "is_configured": true, 00:27:41.472 "data_offset": 2048, 00:27:41.472 "data_size": 63488 00:27:41.472 } 00:27:41.472 ] 00:27:41.472 }' 00:27:41.472 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.472 23:13:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.039 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.299 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.299 "name": "raid_bdev1", 00:27:42.299 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:42.299 "strip_size_kb": 0, 00:27:42.299 "state": "online", 00:27:42.299 "raid_level": "raid1", 00:27:42.299 "superblock": true, 00:27:42.299 "num_base_bdevs": 2, 00:27:42.299 "num_base_bdevs_discovered": 1, 00:27:42.299 "num_base_bdevs_operational": 1, 00:27:42.299 "base_bdevs_list": [ 00:27:42.299 { 00:27:42.299 "name": null, 00:27:42.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.299 "is_configured": false, 00:27:42.299 "data_offset": 2048, 00:27:42.299 "data_size": 63488 00:27:42.299 }, 00:27:42.299 { 00:27:42.299 "name": "BaseBdev2", 00:27:42.299 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:42.299 "is_configured": true, 00:27:42.299 "data_offset": 2048, 00:27:42.299 "data_size": 63488 00:27:42.299 } 00:27:42.299 ] 00:27:42.299 }' 00:27:42.299 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.299 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:42.299 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.299 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:42.299 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:42.575 [2024-07-13 23:13:31.935364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:42.857 23:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:42.857 [2024-07-13 
23:13:32.016935] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:27:42.857 [2024-07-13 23:13:32.019568] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:42.857 [2024-07-13 23:13:32.150802] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:42.857 [2024-07-13 23:13:32.151599] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:43.116 [2024-07-13 23:13:32.363290] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:43.116 [2024-07-13 23:13:32.363963] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:43.374 [2024-07-13 23:13:32.759448] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:43.374 [2024-07-13 23:13:32.759963] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:43.632 23:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:43.632 23:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.632 23:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:43.632 23:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:43.632 23:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.632 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.632 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.632 [2024-07-13 23:13:33.011016] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:43.891 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.891 "name": "raid_bdev1", 00:27:43.891 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:43.891 "strip_size_kb": 0, 00:27:43.891 "state": "online", 00:27:43.891 "raid_level": "raid1", 00:27:43.891 "superblock": true, 00:27:43.891 "num_base_bdevs": 2, 00:27:43.891 "num_base_bdevs_discovered": 2, 00:27:43.891 "num_base_bdevs_operational": 2, 00:27:43.891 "process": { 00:27:43.891 "type": "rebuild", 00:27:43.891 "target": "spare", 00:27:43.891 "progress": { 00:27:43.891 "blocks": 18432, 00:27:43.891 "percent": 29 00:27:43.891 } 00:27:43.891 }, 00:27:43.891 "base_bdevs_list": [ 00:27:43.891 { 00:27:43.891 "name": "spare", 00:27:43.891 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:43.891 "is_configured": true, 00:27:43.891 "data_offset": 2048, 00:27:43.891 "data_size": 63488 00:27:43.891 }, 00:27:43.891 { 00:27:43.891 "name": "BaseBdev2", 00:27:43.891 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:43.891 "is_configured": true, 00:27:43.891 "data_offset": 2048, 00:27:43.891 "data_size": 63488 00:27:43.891 } 00:27:43.891 ] 00:27:43.891 }' 00:27:43.891 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
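The sh@182-190 trace lines recurring throughout this run all come from a single helper: fetch the raid bdev's JSON via bdev_raid_get_bdevs plus a jq select, then compare .process.type and .process.target against the expected rebuild phase. Reconstructed roughly from the traced statements (variable names as in the log; the rpc.py invocation is shortened):

    # Sketch of verify_raid_bdev_process as exercised at bdev_raid.sh@182-190 above.
    verify_raid_bdev_process() {
        local raid_bdev_name=$1
        local process_type=$2    # "rebuild" while running, "none" once finished
        local target=$3          # e.g. "spare", or "none"
        local raid_bdev_info

        raid_bdev_info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }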
00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.149 [2024-07-13 23:13:33.368540] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:44.149 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=851 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.149 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.406 [2024-07-13 23:13:33.594155] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:44.406 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.406 "name": "raid_bdev1", 00:27:44.406 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:44.406 "strip_size_kb": 0, 00:27:44.406 "state": "online", 00:27:44.406 "raid_level": "raid1", 00:27:44.406 "superblock": true, 00:27:44.406 "num_base_bdevs": 2, 00:27:44.406 "num_base_bdevs_discovered": 2, 00:27:44.406 "num_base_bdevs_operational": 2, 00:27:44.406 "process": { 00:27:44.406 "type": "rebuild", 00:27:44.406 "target": "spare", 00:27:44.407 "progress": { 00:27:44.407 "blocks": 20480, 00:27:44.407 "percent": 32 00:27:44.407 } 00:27:44.407 }, 00:27:44.407 "base_bdevs_list": [ 00:27:44.407 { 00:27:44.407 "name": "spare", 00:27:44.407 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:44.407 "is_configured": true, 00:27:44.407 "data_offset": 2048, 00:27:44.407 "data_size": 63488 00:27:44.407 }, 00:27:44.407 { 00:27:44.407 "name": "BaseBdev2", 00:27:44.407 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:44.407 "is_configured": true, 00:27:44.407 "data_offset": 2048, 00:27:44.407 "data_size": 
63488 00:27:44.407 } 00:27:44.407 ] 00:27:44.407 }' 00:27:44.407 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.407 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:44.407 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.407 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.407 23:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:44.664 [2024-07-13 23:13:33.931238] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:44.921 [2024-07-13 23:13:34.175410] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:45.180 [2024-07-13 23:13:34.530829] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.439 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.439 [2024-07-13 23:13:34.758058] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:45.697 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.697 "name": "raid_bdev1", 00:27:45.697 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:45.697 "strip_size_kb": 0, 00:27:45.697 "state": "online", 00:27:45.697 "raid_level": "raid1", 00:27:45.697 "superblock": true, 00:27:45.697 "num_base_bdevs": 2, 00:27:45.697 "num_base_bdevs_discovered": 2, 00:27:45.697 "num_base_bdevs_operational": 2, 00:27:45.697 "process": { 00:27:45.697 "type": "rebuild", 00:27:45.697 "target": "spare", 00:27:45.697 "progress": { 00:27:45.697 "blocks": 34816, 00:27:45.697 "percent": 54 00:27:45.697 } 00:27:45.697 }, 00:27:45.697 "base_bdevs_list": [ 00:27:45.697 { 00:27:45.697 "name": "spare", 00:27:45.697 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:45.697 "is_configured": true, 00:27:45.697 "data_offset": 2048, 00:27:45.697 "data_size": 63488 00:27:45.697 }, 00:27:45.697 { 00:27:45.697 "name": "BaseBdev2", 00:27:45.697 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:45.697 "is_configured": true, 00:27:45.697 "data_offset": 2048, 00:27:45.697 "data_size": 63488 00:27:45.697 } 00:27:45.697 ] 00:27:45.697 }' 00:27:45.697 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:27:45.697 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.697 23:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.697 23:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.697 23:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:45.955 [2024-07-13 23:13:35.228753] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:46.522 [2024-07-13 23:13:35.661223] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.781 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.039 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.039 "name": "raid_bdev1", 00:27:47.039 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:47.039 "strip_size_kb": 0, 00:27:47.039 "state": "online", 00:27:47.039 "raid_level": "raid1", 00:27:47.039 "superblock": true, 00:27:47.039 "num_base_bdevs": 2, 00:27:47.039 "num_base_bdevs_discovered": 2, 00:27:47.039 "num_base_bdevs_operational": 2, 00:27:47.039 "process": { 00:27:47.039 "type": "rebuild", 00:27:47.039 "target": "spare", 00:27:47.039 "progress": { 00:27:47.039 "blocks": 57344, 00:27:47.039 "percent": 90 00:27:47.039 } 00:27:47.039 }, 00:27:47.039 "base_bdevs_list": [ 00:27:47.039 { 00:27:47.039 "name": "spare", 00:27:47.039 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:47.039 "is_configured": true, 00:27:47.039 "data_offset": 2048, 00:27:47.039 "data_size": 63488 00:27:47.039 }, 00:27:47.039 { 00:27:47.039 "name": "BaseBdev2", 00:27:47.039 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:47.039 "is_configured": true, 00:27:47.039 "data_offset": 2048, 00:27:47.039 "data_size": 63488 00:27:47.039 } 00:27:47.039 ] 00:27:47.039 }' 00:27:47.039 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.039 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.039 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.039 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.039 23:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.298 
[2024-07-13 23:13:36.540974] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:47.298 [2024-07-13 23:13:36.648700] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:47.298 [2024-07-13 23:13:36.652692] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.231 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.489 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.489 "name": "raid_bdev1", 00:27:48.489 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:48.489 "strip_size_kb": 0, 00:27:48.489 "state": "online", 00:27:48.489 "raid_level": "raid1", 00:27:48.489 "superblock": true, 00:27:48.489 "num_base_bdevs": 2, 00:27:48.489 "num_base_bdevs_discovered": 2, 00:27:48.489 "num_base_bdevs_operational": 2, 00:27:48.489 "base_bdevs_list": [ 00:27:48.489 { 00:27:48.489 "name": "spare", 00:27:48.489 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:48.489 "is_configured": true, 00:27:48.489 "data_offset": 2048, 00:27:48.489 "data_size": 63488 00:27:48.489 }, 00:27:48.489 { 00:27:48.489 "name": "BaseBdev2", 00:27:48.489 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:48.489 "is_configured": true, 00:27:48.489 "data_offset": 2048, 00:27:48.489 "data_size": 63488 00:27:48.489 } 00:27:48.489 ] 00:27:48.489 }' 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.490 23:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.747 "name": "raid_bdev1", 00:27:48.747 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:48.747 "strip_size_kb": 0, 00:27:48.747 "state": "online", 00:27:48.747 "raid_level": "raid1", 00:27:48.747 "superblock": true, 00:27:48.747 "num_base_bdevs": 2, 00:27:48.747 "num_base_bdevs_discovered": 2, 00:27:48.747 "num_base_bdevs_operational": 2, 00:27:48.747 "base_bdevs_list": [ 00:27:48.747 { 00:27:48.747 "name": "spare", 00:27:48.747 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:48.747 "is_configured": true, 00:27:48.747 "data_offset": 2048, 00:27:48.747 "data_size": 63488 00:27:48.747 }, 00:27:48.747 { 00:27:48.747 "name": "BaseBdev2", 00:27:48.747 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:48.747 "is_configured": true, 00:27:48.747 "data_offset": 2048, 00:27:48.747 "data_size": 63488 00:27:48.747 } 00:27:48.747 ] 00:27:48.747 }' 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.747 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:49.006 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.006 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.264 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:49.264 "name": "raid_bdev1", 00:27:49.264 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:49.264 "strip_size_kb": 0, 00:27:49.264 "state": "online", 00:27:49.264 "raid_level": "raid1", 00:27:49.264 "superblock": true, 00:27:49.264 "num_base_bdevs": 2, 00:27:49.264 "num_base_bdevs_discovered": 2, 00:27:49.264 
"num_base_bdevs_operational": 2, 00:27:49.264 "base_bdevs_list": [ 00:27:49.264 { 00:27:49.264 "name": "spare", 00:27:49.264 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:49.264 "is_configured": true, 00:27:49.264 "data_offset": 2048, 00:27:49.264 "data_size": 63488 00:27:49.264 }, 00:27:49.264 { 00:27:49.264 "name": "BaseBdev2", 00:27:49.264 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:49.264 "is_configured": true, 00:27:49.264 "data_offset": 2048, 00:27:49.264 "data_size": 63488 00:27:49.264 } 00:27:49.264 ] 00:27:49.264 }' 00:27:49.264 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:49.264 23:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:50.090 [2024-07-13 23:13:39.388344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:50.090 [2024-07-13 23:13:39.388716] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:50.090 00:27:50.090 Latency(us) 00:27:50.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.090 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:50.090 raid_bdev1 : 12.15 107.04 321.13 0.00 0.00 12673.45 286.72 111053.73 00:27:50.090 =================================================================================================================== 00:27:50.090 Total : 107.04 321.13 0.00 0.00 12673.45 286.72 111053.73 00:27:50.090 [2024-07-13 23:13:39.404601] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.090 [2024-07-13 23:13:39.404829] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.090 [2024-07-13 23:13:39.405033] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:50.090 [2024-07-13 23:13:39.405192] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:27:50.090 0 00:27:50.090 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.090 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:50.349 23:13:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:50.349 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:50.607 /dev/nbd0 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.607 1+0 records in 00:27:50.607 1+0 records out 00:27:50.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055666 s, 7.4 MB/s 00:27:50.607 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:50.608 23:13:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:50.608 23:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:51.173 /dev/nbd1 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:51.173 1+0 records in 00:27:51.173 1+0 records out 00:27:51.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618048 s, 6.6 MB/s 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:51.173 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:51.174 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:51.174 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:51.174 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:51.174 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.174 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.432 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:51.690 23:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:51.949 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:52.208 [2024-07-13 23:13:41.404146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:52.208 [2024-07-13 23:13:41.404631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.208 [2024-07-13 23:13:41.404846] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000009080 00:27:52.208 [2024-07-13 23:13:41.405059] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.208 [2024-07-13 23:13:41.408469] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.208 [2024-07-13 23:13:41.408676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:52.208 [2024-07-13 23:13:41.409004] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:52.208 [2024-07-13 23:13:41.409214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:52.208 [2024-07-13 23:13:41.409656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:52.208 spare 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.208 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.208 [2024-07-13 23:13:41.509950] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:27:52.208 [2024-07-13 23:13:41.510195] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:52.208 [2024-07-13 23:13:41.510491] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:27:52.208 [2024-07-13 23:13:41.511230] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:27:52.208 [2024-07-13 23:13:41.511366] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:27:52.208 [2024-07-13 23:13:41.511650] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.467 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.467 "name": "raid_bdev1", 00:27:52.467 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:52.467 "strip_size_kb": 0, 00:27:52.467 "state": "online", 00:27:52.467 "raid_level": "raid1", 00:27:52.467 "superblock": true, 00:27:52.467 "num_base_bdevs": 2, 00:27:52.467 "num_base_bdevs_discovered": 2, 00:27:52.467 "num_base_bdevs_operational": 2, 00:27:52.467 "base_bdevs_list": [ 00:27:52.467 { 00:27:52.467 "name": "spare", 
00:27:52.467 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:52.467 "is_configured": true, 00:27:52.467 "data_offset": 2048, 00:27:52.467 "data_size": 63488 00:27:52.467 }, 00:27:52.467 { 00:27:52.467 "name": "BaseBdev2", 00:27:52.467 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:52.467 "is_configured": true, 00:27:52.467 "data_offset": 2048, 00:27:52.467 "data_size": 63488 00:27:52.467 } 00:27:52.467 ] 00:27:52.467 }' 00:27:52.467 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.467 23:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.033 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:53.292 "name": "raid_bdev1", 00:27:53.292 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:53.292 "strip_size_kb": 0, 00:27:53.292 "state": "online", 00:27:53.292 "raid_level": "raid1", 00:27:53.292 "superblock": true, 00:27:53.292 "num_base_bdevs": 2, 00:27:53.292 "num_base_bdevs_discovered": 2, 00:27:53.292 "num_base_bdevs_operational": 2, 00:27:53.292 "base_bdevs_list": [ 00:27:53.292 { 00:27:53.292 "name": "spare", 00:27:53.292 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:53.292 "is_configured": true, 00:27:53.292 "data_offset": 2048, 00:27:53.292 "data_size": 63488 00:27:53.292 }, 00:27:53.292 { 00:27:53.292 "name": "BaseBdev2", 00:27:53.292 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:53.292 "is_configured": true, 00:27:53.292 "data_offset": 2048, 00:27:53.292 "data_size": 63488 00:27:53.292 } 00:27:53.292 ] 00:27:53.292 }' 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.292 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:53.550 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:53.550 23:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
spare 00:27:53.809 [2024-07-13 23:13:43.089952] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.809 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.091 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.091 "name": "raid_bdev1", 00:27:54.091 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:54.091 "strip_size_kb": 0, 00:27:54.091 "state": "online", 00:27:54.091 "raid_level": "raid1", 00:27:54.091 "superblock": true, 00:27:54.091 "num_base_bdevs": 2, 00:27:54.091 "num_base_bdevs_discovered": 1, 00:27:54.091 "num_base_bdevs_operational": 1, 00:27:54.091 "base_bdevs_list": [ 00:27:54.091 { 00:27:54.091 "name": null, 00:27:54.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.091 "is_configured": false, 00:27:54.091 "data_offset": 2048, 00:27:54.091 "data_size": 63488 00:27:54.091 }, 00:27:54.091 { 00:27:54.091 "name": "BaseBdev2", 00:27:54.091 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:54.091 "is_configured": true, 00:27:54.091 "data_offset": 2048, 00:27:54.091 "data_size": 63488 00:27:54.091 } 00:27:54.091 ] 00:27:54.091 }' 00:27:54.091 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.091 23:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:54.675 23:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:54.933 [2024-07-13 23:13:44.294484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.933 [2024-07-13 23:13:44.295120] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:54.933 [2024-07-13 23:13:44.295271] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
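The verify_raid_bdev_state calls traced above reduce to a single RPC query plus field checks. A condensed sketch of that pattern — not the verbatim bdev_raid.sh helper — assuming the same rpc.py path and socket shown in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull raid_bdev1's descriptor out of the full bdev list.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # After bdev_raid_remove_base_bdev the array must stay online but degraded:
    # still raid1, with only one of the two base bdevs discovered/operational.
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 1 ]]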
00:27:54.933 [2024-07-13 23:13:44.295523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.933 [2024-07-13 23:13:44.303737] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:27:54.933 [2024-07-13 23:13:44.306544] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:54.933 23:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.311 "name": "raid_bdev1", 00:27:56.311 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:56.311 "strip_size_kb": 0, 00:27:56.311 "state": "online", 00:27:56.311 "raid_level": "raid1", 00:27:56.311 "superblock": true, 00:27:56.311 "num_base_bdevs": 2, 00:27:56.311 "num_base_bdevs_discovered": 2, 00:27:56.311 "num_base_bdevs_operational": 2, 00:27:56.311 "process": { 00:27:56.311 "type": "rebuild", 00:27:56.311 "target": "spare", 00:27:56.311 "progress": { 00:27:56.311 "blocks": 24576, 00:27:56.311 "percent": 38 00:27:56.311 } 00:27:56.311 }, 00:27:56.311 "base_bdevs_list": [ 00:27:56.311 { 00:27:56.311 "name": "spare", 00:27:56.311 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:56.311 "is_configured": true, 00:27:56.311 "data_offset": 2048, 00:27:56.311 "data_size": 63488 00:27:56.311 }, 00:27:56.311 { 00:27:56.311 "name": "BaseBdev2", 00:27:56.311 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:56.311 "is_configured": true, 00:27:56.311 "data_offset": 2048, 00:27:56.311 "data_size": 63488 00:27:56.311 } 00:27:56.311 ] 00:27:56.311 }' 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:56.311 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:56.570 [2024-07-13 23:13:45.912619] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.570 [2024-07-13 23:13:45.919299] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:56.570 [2024-07-13 23:13:45.919558] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:27:56.570 [2024-07-13 23:13:45.919743] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.570 [2024-07-13 23:13:45.919802] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.570 23:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.829 23:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.829 "name": "raid_bdev1", 00:27:56.829 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:56.829 "strip_size_kb": 0, 00:27:56.829 "state": "online", 00:27:56.829 "raid_level": "raid1", 00:27:56.829 "superblock": true, 00:27:56.829 "num_base_bdevs": 2, 00:27:56.829 "num_base_bdevs_discovered": 1, 00:27:56.829 "num_base_bdevs_operational": 1, 00:27:56.829 "base_bdevs_list": [ 00:27:56.829 { 00:27:56.829 "name": null, 00:27:56.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.829 "is_configured": false, 00:27:56.829 "data_offset": 2048, 00:27:56.829 "data_size": 63488 00:27:56.829 }, 00:27:56.829 { 00:27:56.829 "name": "BaseBdev2", 00:27:56.829 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:56.829 "is_configured": true, 00:27:56.829 "data_offset": 2048, 00:27:56.829 "data_size": 63488 00:27:56.829 } 00:27:56.829 ] 00:27:56.829 }' 00:27:56.829 23:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.829 23:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:57.765 23:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:57.765 [2024-07-13 23:13:47.077041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:57.765 [2024-07-13 23:13:47.077558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.765 [2024-07-13 23:13:47.077802] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:57.765 [2024-07-13 23:13:47.077977] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.765 [2024-07-13 23:13:47.078765] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.765 [2024-07-13 23:13:47.078954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:57.765 [2024-07-13 23:13:47.079270] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:57.765 [2024-07-13 23:13:47.079414] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:57.765 [2024-07-13 23:13:47.079543] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:57.765 [2024-07-13 23:13:47.079679] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:57.765 [2024-07-13 23:13:47.087424] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:27:57.765 spare 00:27:57.765 [2024-07-13 23:13:47.090127] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:57.765 23:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:59.142 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:59.142 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:59.142 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:59.142 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:59.142 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:59.142 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.143 "name": "raid_bdev1", 00:27:59.143 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:59.143 "strip_size_kb": 0, 00:27:59.143 "state": "online", 00:27:59.143 "raid_level": "raid1", 00:27:59.143 "superblock": true, 00:27:59.143 "num_base_bdevs": 2, 00:27:59.143 "num_base_bdevs_discovered": 2, 00:27:59.143 "num_base_bdevs_operational": 2, 00:27:59.143 "process": { 00:27:59.143 "type": "rebuild", 00:27:59.143 "target": "spare", 00:27:59.143 "progress": { 00:27:59.143 "blocks": 24576, 00:27:59.143 "percent": 38 00:27:59.143 } 00:27:59.143 }, 00:27:59.143 "base_bdevs_list": [ 00:27:59.143 { 00:27:59.143 "name": "spare", 00:27:59.143 "uuid": "f9f9f01d-ec2c-5214-8d66-9ad021777788", 00:27:59.143 "is_configured": true, 00:27:59.143 "data_offset": 2048, 00:27:59.143 "data_size": 63488 00:27:59.143 }, 00:27:59.143 { 00:27:59.143 "name": "BaseBdev2", 00:27:59.143 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:59.143 "is_configured": true, 00:27:59.143 "data_offset": 2048, 00:27:59.143 "data_size": 63488 00:27:59.143 } 00:27:59.143 ] 00:27:59.143 }' 00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
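The "(( SECONDS < timeout ))" / "sleep 1" guard that keeps repeating in the trace is a plain bash polling loop built on the shell's builtin SECONDS counter. A minimal sketch under stated assumptions — the timeout value is not visible in this excerpt, so 60 below is an assumption:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=60   # assumed; the real value is set elsewhere in bdev_raid.sh

    # Re-query the raid bdev once per second until the rebuild process entry
    # disappears ('.process.type // "none"' falls back to "none" once the
    # rebuild has finished), or until the SECONDS counter exceeds the timeout.
    while (( SECONDS < timeout )); do
        ptype=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == none ]] && break
        sleep 1
    done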
00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.143 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:59.402 [2024-07-13 23:13:48.740181] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.402 [2024-07-13 23:13:48.803685] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:59.402 [2024-07-13 23:13:48.804265] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.402 [2024-07-13 23:13:48.804429] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.402 [2024-07-13 23:13:48.804579] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.660 23:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.919 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.919 "name": "raid_bdev1", 00:27:59.919 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:27:59.919 "strip_size_kb": 0, 00:27:59.919 "state": "online", 00:27:59.919 "raid_level": "raid1", 00:27:59.919 "superblock": true, 00:27:59.919 "num_base_bdevs": 2, 00:27:59.919 "num_base_bdevs_discovered": 1, 00:27:59.919 "num_base_bdevs_operational": 1, 00:27:59.919 "base_bdevs_list": [ 00:27:59.919 { 00:27:59.919 "name": null, 00:27:59.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.919 "is_configured": false, 00:27:59.919 "data_offset": 2048, 00:27:59.919 "data_size": 63488 00:27:59.919 }, 00:27:59.919 { 00:27:59.919 "name": "BaseBdev2", 00:27:59.919 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:27:59.919 "is_configured": true, 00:27:59.919 "data_offset": 2048, 00:27:59.919 "data_size": 63488 00:27:59.919 } 00:27:59.919 ] 00:27:59.919 }' 00:27:59.919 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:27:59.919 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:00.486 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:00.487 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.487 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:00.487 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:00.487 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.487 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.487 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.745 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:00.745 "name": "raid_bdev1", 00:28:00.745 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:28:00.745 "strip_size_kb": 0, 00:28:00.745 "state": "online", 00:28:00.745 "raid_level": "raid1", 00:28:00.745 "superblock": true, 00:28:00.745 "num_base_bdevs": 2, 00:28:00.745 "num_base_bdevs_discovered": 1, 00:28:00.745 "num_base_bdevs_operational": 1, 00:28:00.745 "base_bdevs_list": [ 00:28:00.745 { 00:28:00.745 "name": null, 00:28:00.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.745 "is_configured": false, 00:28:00.745 "data_offset": 2048, 00:28:00.745 "data_size": 63488 00:28:00.745 }, 00:28:00.745 { 00:28:00.745 "name": "BaseBdev2", 00:28:00.745 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:28:00.745 "is_configured": true, 00:28:00.745 "data_offset": 2048, 00:28:00.745 "data_size": 63488 00:28:00.745 } 00:28:00.745 ] 00:28:00.745 }' 00:28:00.745 23:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.745 23:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:00.745 23:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.745 23:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:00.745 23:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:01.004 23:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:01.262 [2024-07-13 23:13:50.565448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:01.262 [2024-07-13 23:13:50.566039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.262 [2024-07-13 23:13:50.566241] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:01.262 [2024-07-13 23:13:50.566435] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.262 [2024-07-13 23:13:50.567220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.262 [2024-07-13 23:13:50.567445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:28:01.262 [2024-07-13 23:13:50.567776] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:01.262 [2024-07-13 23:13:50.567916] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:01.262 [2024-07-13 23:13:50.568039] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:01.262 BaseBdev1 00:28:01.262 23:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.197 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.765 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.765 "name": "raid_bdev1", 00:28:02.765 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:28:02.765 "strip_size_kb": 0, 00:28:02.765 "state": "online", 00:28:02.765 "raid_level": "raid1", 00:28:02.765 "superblock": true, 00:28:02.765 "num_base_bdevs": 2, 00:28:02.765 "num_base_bdevs_discovered": 1, 00:28:02.765 "num_base_bdevs_operational": 1, 00:28:02.765 "base_bdevs_list": [ 00:28:02.765 { 00:28:02.765 "name": null, 00:28:02.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.765 "is_configured": false, 00:28:02.765 "data_offset": 2048, 00:28:02.765 "data_size": 63488 00:28:02.765 }, 00:28:02.765 { 00:28:02.765 "name": "BaseBdev2", 00:28:02.765 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:28:02.765 "is_configured": true, 00:28:02.765 "data_offset": 2048, 00:28:02.765 "data_size": 63488 00:28:02.765 } 00:28:02.765 ] 00:28:02.765 }' 00:28:02.765 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.765 23:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.333 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:03.592 "name": "raid_bdev1", 00:28:03.592 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:28:03.592 "strip_size_kb": 0, 00:28:03.592 "state": "online", 00:28:03.592 "raid_level": "raid1", 00:28:03.592 "superblock": true, 00:28:03.592 "num_base_bdevs": 2, 00:28:03.592 "num_base_bdevs_discovered": 1, 00:28:03.592 "num_base_bdevs_operational": 1, 00:28:03.592 "base_bdevs_list": [ 00:28:03.592 { 00:28:03.592 "name": null, 00:28:03.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.592 "is_configured": false, 00:28:03.592 "data_offset": 2048, 00:28:03.592 "data_size": 63488 00:28:03.592 }, 00:28:03.592 { 00:28:03.592 "name": "BaseBdev2", 00:28:03.592 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:28:03.592 "is_configured": true, 00:28:03.592 "data_offset": 2048, 00:28:03.592 "data_size": 63488 00:28:03.592 } 00:28:03.592 ] 00:28:03.592 }' 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:03.592 23:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:03.851 [2024-07-13 23:13:53.109727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:03.851 [2024-07-13 23:13:53.110434] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:03.851 [2024-07-13 23:13:53.110631] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:03.851 request: 00:28:03.851 { 00:28:03.851 "base_bdev": "BaseBdev1", 00:28:03.851 "raid_bdev": "raid_bdev1", 00:28:03.851 "method": "bdev_raid_add_base_bdev", 00:28:03.851 "req_id": 1 00:28:03.851 } 00:28:03.851 Got JSON-RPC error response 00:28:03.851 response: 00:28:03.851 { 00:28:03.851 "code": -22, 00:28:03.851 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:03.851 } 00:28:03.851 23:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:28:03.851 23:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.851 23:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.851 23:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.851 23:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.784 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.042 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.042 "name": "raid_bdev1", 00:28:05.042 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:28:05.042 "strip_size_kb": 0, 00:28:05.042 "state": "online", 00:28:05.042 "raid_level": "raid1", 00:28:05.042 "superblock": true, 00:28:05.042 "num_base_bdevs": 2, 00:28:05.042 "num_base_bdevs_discovered": 1, 00:28:05.042 "num_base_bdevs_operational": 1, 00:28:05.042 
"base_bdevs_list": [ 00:28:05.042 { 00:28:05.042 "name": null, 00:28:05.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.042 "is_configured": false, 00:28:05.042 "data_offset": 2048, 00:28:05.042 "data_size": 63488 00:28:05.042 }, 00:28:05.042 { 00:28:05.042 "name": "BaseBdev2", 00:28:05.042 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:28:05.042 "is_configured": true, 00:28:05.042 "data_offset": 2048, 00:28:05.042 "data_size": 63488 00:28:05.042 } 00:28:05.042 ] 00:28:05.042 }' 00:28:05.042 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.042 23:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:05.975 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:05.975 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:05.975 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:05.975 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:05.975 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:05.975 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.976 "name": "raid_bdev1", 00:28:05.976 "uuid": "8abf4dc8-7616-4f78-9dd9-dcf48063e7c1", 00:28:05.976 "strip_size_kb": 0, 00:28:05.976 "state": "online", 00:28:05.976 "raid_level": "raid1", 00:28:05.976 "superblock": true, 00:28:05.976 "num_base_bdevs": 2, 00:28:05.976 "num_base_bdevs_discovered": 1, 00:28:05.976 "num_base_bdevs_operational": 1, 00:28:05.976 "base_bdevs_list": [ 00:28:05.976 { 00:28:05.976 "name": null, 00:28:05.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.976 "is_configured": false, 00:28:05.976 "data_offset": 2048, 00:28:05.976 "data_size": 63488 00:28:05.976 }, 00:28:05.976 { 00:28:05.976 "name": "BaseBdev2", 00:28:05.976 "uuid": "d178658c-c07e-5004-a143-931c8305188b", 00:28:05.976 "is_configured": true, 00:28:05.976 "data_offset": 2048, 00:28:05.976 "data_size": 63488 00:28:05.976 } 00:28:05.976 ] 00:28:05.976 }' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 155760 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 155760 ']' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 155760 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155760 00:28:05.976 killing process with pid 155760 00:28:05.976 Received shutdown signal, test time was about 28.099863 seconds 00:28:05.976 00:28:05.976 Latency(us) 00:28:05.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.976 =================================================================================================================== 00:28:05.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155760' 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 155760 00:28:05.976 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 155760 00:28:05.976 [2024-07-13 23:13:55.345259] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:05.976 [2024-07-13 23:13:55.345503] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.976 [2024-07-13 23:13:55.345777] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.976 [2024-07-13 23:13:55.346079] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:05.976 [2024-07-13 23:13:55.381296] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:06.543 00:28:06.543 real 0m32.904s 00:28:06.543 user 0m52.955s 00:28:06.543 ************************************ 00:28:06.543 END TEST raid_rebuild_test_sb_io 00:28:06.543 ************************************ 00:28:06.543 sys 0m3.599s 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:06.543 23:13:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:06.543 23:13:55 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:28:06.543 23:13:55 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:28:06.543 23:13:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:06.543 23:13:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.543 23:13:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:06.543 ************************************ 00:28:06.543 START TEST raid_rebuild_test 00:28:06.543 ************************************ 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:06.543 
23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=156639 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 156639 /var/tmp/spdk-raid.sock 00:28:06.543 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 156639 ']' 00:28:06.544 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:06.544 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.544 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:28:06.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:06.544 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.544 23:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.544 [2024-07-13 23:13:55.856931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:06.544 [2024-07-13 23:13:55.857443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156639 ] 00:28:06.544 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:06.544 Zero copy mechanism will not be used. 00:28:06.803 [2024-07-13 23:13:55.994257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.803 [2024-07-13 23:13:56.091553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.803 [2024-07-13 23:13:56.163850] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.445 23:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.445 23:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:28:07.445 23:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:07.445 23:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:07.703 BaseBdev1_malloc 00:28:07.703 23:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:07.962 [2024-07-13 23:13:57.315616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:07.962 [2024-07-13 23:13:57.315949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:07.962 [2024-07-13 23:13:57.316134] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:28:07.962 [2024-07-13 23:13:57.316316] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:07.962 [2024-07-13 23:13:57.319283] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:07.962 [2024-07-13 23:13:57.319489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:07.962 BaseBdev1 00:28:07.962 23:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:07.962 23:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:08.221 BaseBdev2_malloc 00:28:08.221 23:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:08.479 [2024-07-13 23:13:57.825930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:08.480 [2024-07-13 23:13:57.826366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.480 [2024-07-13 23:13:57.826464] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:08.480 [2024-07-13 23:13:57.826779] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.480 [2024-07-13 23:13:57.829505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.480 [2024-07-13 23:13:57.829725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:08.480 BaseBdev2 00:28:08.480 23:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.480 23:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:08.739 BaseBdev3_malloc 00:28:08.739 23:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:08.998 [2024-07-13 23:13:58.343713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:08.998 [2024-07-13 23:13:58.344078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.998 [2024-07-13 23:13:58.344313] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:08.998 [2024-07-13 23:13:58.344516] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.998 [2024-07-13 23:13:58.347808] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.998 [2024-07-13 23:13:58.348022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:08.998 BaseBdev3 00:28:08.998 23:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.998 23:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:09.256 BaseBdev4_malloc 00:28:09.256 23:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:09.516 [2024-07-13 23:13:58.872328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:09.516 [2024-07-13 23:13:58.872828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.516 [2024-07-13 23:13:58.873109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:09.516 [2024-07-13 23:13:58.873327] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.516 [2024-07-13 23:13:58.876528] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.516 [2024-07-13 23:13:58.876746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:09.516 BaseBdev4 00:28:09.516 23:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:09.774 spare_malloc 00:28:09.774 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:10.032 spare_delay 
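(Sketch, assembled from the RPC calls logged above and immediately below: the test's device stack is four malloc bdevs each wrapped in a passthru bdev, plus a spare whose writes go through a delay bdev before its own passthru. The $rpc shorthand is illustrative.)

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc      # 32 MiB, 512-byte blocks
    $rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
done
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare              # spare sees delayed writes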
00:28:10.032 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:10.290 [2024-07-13 23:13:59.628306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:10.290 [2024-07-13 23:13:59.628800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.290 [2024-07-13 23:13:59.629013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:10.290 [2024-07-13 23:13:59.629183] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.290 [2024-07-13 23:13:59.631993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.290 [2024-07-13 23:13:59.632202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:10.290 spare 00:28:10.290 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:10.549 [2024-07-13 23:13:59.844680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:10.549 [2024-07-13 23:13:59.847471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.549 [2024-07-13 23:13:59.847684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:10.549 [2024-07-13 23:13:59.847792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:10.549 [2024-07-13 23:13:59.848044] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:10.549 [2024-07-13 23:13:59.848111] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:10.549 [2024-07-13 23:13:59.848505] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:10.549 [2024-07-13 23:13:59.849258] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:10.549 [2024-07-13 23:13:59.849424] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:10.549 [2024-07-13 23:13:59.849913] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.549 23:13:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.549 23:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.807 23:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.807 "name": "raid_bdev1", 00:28:10.807 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:10.807 "strip_size_kb": 0, 00:28:10.807 "state": "online", 00:28:10.807 "raid_level": "raid1", 00:28:10.807 "superblock": false, 00:28:10.807 "num_base_bdevs": 4, 00:28:10.807 "num_base_bdevs_discovered": 4, 00:28:10.807 "num_base_bdevs_operational": 4, 00:28:10.807 "base_bdevs_list": [ 00:28:10.807 { 00:28:10.807 "name": "BaseBdev1", 00:28:10.807 "uuid": "0e94c706-74b0-5b82-8b40-6987e9341cde", 00:28:10.807 "is_configured": true, 00:28:10.807 "data_offset": 0, 00:28:10.807 "data_size": 65536 00:28:10.807 }, 00:28:10.807 { 00:28:10.807 "name": "BaseBdev2", 00:28:10.807 "uuid": "23ff415f-641f-5bc5-a7b3-cc155ef6a21f", 00:28:10.807 "is_configured": true, 00:28:10.807 "data_offset": 0, 00:28:10.807 "data_size": 65536 00:28:10.807 }, 00:28:10.807 { 00:28:10.807 "name": "BaseBdev3", 00:28:10.807 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:10.807 "is_configured": true, 00:28:10.807 "data_offset": 0, 00:28:10.807 "data_size": 65536 00:28:10.807 }, 00:28:10.807 { 00:28:10.807 "name": "BaseBdev4", 00:28:10.807 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:10.807 "is_configured": true, 00:28:10.807 "data_offset": 0, 00:28:10.807 "data_size": 65536 00:28:10.807 } 00:28:10.807 ] 00:28:10.807 }' 00:28:10.807 23:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.807 23:14:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.740 23:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:11.740 23:14:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:11.740 [2024-07-13 23:14:01.046604] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.740 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:11.740 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.740 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 
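(Sketch of the I/O step that follows, reusing the $rpc shorthand from the sketch above; command lines mirror the log. After creating the raid1 array the test exposes raid_bdev1 via NBD and fills it with random data: 65536 blocks of 512 bytes, matching the 65536-block raid size queried above.)

$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
$rpc nbd_start_disk raid_bdev1 /dev/nbd0                       # raid bdev as /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
$rpc nbd_stop_disk /dev/nbd0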
00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:11.999 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:12.257 [2024-07-13 23:14:01.610420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:28:12.257 /dev/nbd0 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:12.257 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:12.514 1+0 records in 00:28:12.514 1+0 records out 00:28:12.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337355 s, 12.1 MB/s 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:12.514 23:14:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:20.631 65536+0 records in 00:28:20.631 65536+0 records out 00:28:20.631 33554432 bytes (34 MB, 32 MiB) copied, 8.13882 s, 4.1 MB/s 00:28:20.631 23:14:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:20.631 23:14:09 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:20.631 23:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:20.631 23:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:20.631 23:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:20.631 23:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:20.631 23:14:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:20.889 [2024-07-13 23:14:10.058484] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:20.889 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:21.147 [2024-07-13 23:14:10.322193] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.147 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.405 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.405 "name": "raid_bdev1", 00:28:21.405 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:21.405 "strip_size_kb": 0, 00:28:21.405 "state": "online", 00:28:21.405 "raid_level": "raid1", 00:28:21.405 "superblock": false, 
00:28:21.405 "num_base_bdevs": 4, 00:28:21.405 "num_base_bdevs_discovered": 3, 00:28:21.405 "num_base_bdevs_operational": 3, 00:28:21.405 "base_bdevs_list": [ 00:28:21.405 { 00:28:21.405 "name": null, 00:28:21.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.406 "is_configured": false, 00:28:21.406 "data_offset": 0, 00:28:21.406 "data_size": 65536 00:28:21.406 }, 00:28:21.406 { 00:28:21.406 "name": "BaseBdev2", 00:28:21.406 "uuid": "23ff415f-641f-5bc5-a7b3-cc155ef6a21f", 00:28:21.406 "is_configured": true, 00:28:21.406 "data_offset": 0, 00:28:21.406 "data_size": 65536 00:28:21.406 }, 00:28:21.406 { 00:28:21.406 "name": "BaseBdev3", 00:28:21.406 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:21.406 "is_configured": true, 00:28:21.406 "data_offset": 0, 00:28:21.406 "data_size": 65536 00:28:21.406 }, 00:28:21.406 { 00:28:21.406 "name": "BaseBdev4", 00:28:21.406 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:21.406 "is_configured": true, 00:28:21.406 "data_offset": 0, 00:28:21.406 "data_size": 65536 00:28:21.406 } 00:28:21.406 ] 00:28:21.406 }' 00:28:21.406 23:14:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.406 23:14:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:21.971 23:14:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:22.229 [2024-07-13 23:14:11.418366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:22.229 [2024-07-13 23:14:11.424389] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:28:22.229 [2024-07-13 23:14:11.427128] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:22.229 23:14:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.164 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.422 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.422 "name": "raid_bdev1", 00:28:23.422 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:23.422 "strip_size_kb": 0, 00:28:23.422 "state": "online", 00:28:23.422 "raid_level": "raid1", 00:28:23.422 "superblock": false, 00:28:23.422 "num_base_bdevs": 4, 00:28:23.422 "num_base_bdevs_discovered": 4, 00:28:23.422 "num_base_bdevs_operational": 4, 00:28:23.422 "process": { 00:28:23.422 "type": "rebuild", 00:28:23.422 "target": "spare", 00:28:23.422 "progress": { 00:28:23.422 "blocks": 24576, 00:28:23.422 "percent": 37 00:28:23.422 } 00:28:23.422 }, 00:28:23.422 "base_bdevs_list": [ 00:28:23.422 { 00:28:23.422 "name": "spare", 00:28:23.422 "uuid": 
"69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:23.422 "is_configured": true, 00:28:23.422 "data_offset": 0, 00:28:23.422 "data_size": 65536 00:28:23.422 }, 00:28:23.422 { 00:28:23.422 "name": "BaseBdev2", 00:28:23.422 "uuid": "23ff415f-641f-5bc5-a7b3-cc155ef6a21f", 00:28:23.422 "is_configured": true, 00:28:23.422 "data_offset": 0, 00:28:23.422 "data_size": 65536 00:28:23.422 }, 00:28:23.422 { 00:28:23.422 "name": "BaseBdev3", 00:28:23.422 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:23.422 "is_configured": true, 00:28:23.422 "data_offset": 0, 00:28:23.422 "data_size": 65536 00:28:23.422 }, 00:28:23.422 { 00:28:23.422 "name": "BaseBdev4", 00:28:23.422 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:23.422 "is_configured": true, 00:28:23.422 "data_offset": 0, 00:28:23.422 "data_size": 65536 00:28:23.422 } 00:28:23.422 ] 00:28:23.422 }' 00:28:23.422 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.422 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:23.422 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.422 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:23.422 23:14:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:23.680 [2024-07-13 23:14:13.033028] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.680 [2024-07-13 23:14:13.040988] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:23.680 [2024-07-13 23:14:13.041133] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.680 [2024-07-13 23:14:13.041160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.680 [2024-07-13 23:14:13.041172] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.680 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.248 23:14:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.248 "name": "raid_bdev1", 00:28:24.248 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:24.248 "strip_size_kb": 0, 00:28:24.248 "state": "online", 00:28:24.248 "raid_level": "raid1", 00:28:24.248 "superblock": false, 00:28:24.248 "num_base_bdevs": 4, 00:28:24.248 "num_base_bdevs_discovered": 3, 00:28:24.248 "num_base_bdevs_operational": 3, 00:28:24.248 "base_bdevs_list": [ 00:28:24.248 { 00:28:24.248 "name": null, 00:28:24.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.248 "is_configured": false, 00:28:24.248 "data_offset": 0, 00:28:24.248 "data_size": 65536 00:28:24.248 }, 00:28:24.248 { 00:28:24.248 "name": "BaseBdev2", 00:28:24.248 "uuid": "23ff415f-641f-5bc5-a7b3-cc155ef6a21f", 00:28:24.248 "is_configured": true, 00:28:24.248 "data_offset": 0, 00:28:24.248 "data_size": 65536 00:28:24.248 }, 00:28:24.248 { 00:28:24.248 "name": "BaseBdev3", 00:28:24.248 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:24.248 "is_configured": true, 00:28:24.248 "data_offset": 0, 00:28:24.248 "data_size": 65536 00:28:24.248 }, 00:28:24.248 { 00:28:24.248 "name": "BaseBdev4", 00:28:24.248 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:24.248 "is_configured": true, 00:28:24.248 "data_offset": 0, 00:28:24.248 "data_size": 65536 00:28:24.248 } 00:28:24.248 ] 00:28:24.248 }' 00:28:24.248 23:14:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.248 23:14:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.817 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.077 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.077 "name": "raid_bdev1", 00:28:25.077 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:25.077 "strip_size_kb": 0, 00:28:25.077 "state": "online", 00:28:25.077 "raid_level": "raid1", 00:28:25.077 "superblock": false, 00:28:25.077 "num_base_bdevs": 4, 00:28:25.077 "num_base_bdevs_discovered": 3, 00:28:25.077 "num_base_bdevs_operational": 3, 00:28:25.077 "base_bdevs_list": [ 00:28:25.077 { 00:28:25.077 "name": null, 00:28:25.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.077 "is_configured": false, 00:28:25.077 "data_offset": 0, 00:28:25.077 "data_size": 65536 00:28:25.077 }, 00:28:25.077 { 00:28:25.077 "name": "BaseBdev2", 00:28:25.077 "uuid": "23ff415f-641f-5bc5-a7b3-cc155ef6a21f", 00:28:25.077 "is_configured": true, 00:28:25.077 "data_offset": 0, 00:28:25.077 "data_size": 65536 00:28:25.077 }, 00:28:25.077 { 00:28:25.077 "name": "BaseBdev3", 00:28:25.077 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:25.078 "is_configured": true, 00:28:25.078 "data_offset": 0, 00:28:25.078 "data_size": 65536 00:28:25.078 }, 
00:28:25.078 { 00:28:25.078 "name": "BaseBdev4", 00:28:25.078 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:25.078 "is_configured": true, 00:28:25.078 "data_offset": 0, 00:28:25.078 "data_size": 65536 00:28:25.078 } 00:28:25.078 ] 00:28:25.078 }' 00:28:25.078 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:25.078 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:25.078 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.078 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:25.078 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:25.337 [2024-07-13 23:14:14.673343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:25.337 [2024-07-13 23:14:14.679530] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06560 00:28:25.337 [2024-07-13 23:14:14.682189] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:25.337 23:14:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:26.711 "name": "raid_bdev1", 00:28:26.711 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:26.711 "strip_size_kb": 0, 00:28:26.711 "state": "online", 00:28:26.711 "raid_level": "raid1", 00:28:26.711 "superblock": false, 00:28:26.711 "num_base_bdevs": 4, 00:28:26.711 "num_base_bdevs_discovered": 4, 00:28:26.711 "num_base_bdevs_operational": 4, 00:28:26.711 "process": { 00:28:26.711 "type": "rebuild", 00:28:26.711 "target": "spare", 00:28:26.711 "progress": { 00:28:26.711 "blocks": 24576, 00:28:26.711 "percent": 37 00:28:26.711 } 00:28:26.711 }, 00:28:26.711 "base_bdevs_list": [ 00:28:26.711 { 00:28:26.711 "name": "spare", 00:28:26.711 "uuid": "69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:26.711 "is_configured": true, 00:28:26.711 "data_offset": 0, 00:28:26.711 "data_size": 65536 00:28:26.711 }, 00:28:26.711 { 00:28:26.711 "name": "BaseBdev2", 00:28:26.711 "uuid": "23ff415f-641f-5bc5-a7b3-cc155ef6a21f", 00:28:26.711 "is_configured": true, 00:28:26.711 "data_offset": 0, 00:28:26.711 "data_size": 65536 00:28:26.711 }, 00:28:26.711 { 00:28:26.711 "name": "BaseBdev3", 00:28:26.711 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:26.711 "is_configured": true, 00:28:26.711 "data_offset": 0, 00:28:26.711 "data_size": 65536 
00:28:26.711 }, 00:28:26.711 { 00:28:26.711 "name": "BaseBdev4", 00:28:26.711 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:26.711 "is_configured": true, 00:28:26.711 "data_offset": 0, 00:28:26.711 "data_size": 65536 00:28:26.711 } 00:28:26.711 ] 00:28:26.711 }' 00:28:26.711 23:14:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:26.711 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:26.969 [2024-07-13 23:14:16.340314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:27.227 [2024-07-13 23:14:16.394720] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06560 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.227 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.486 "name": "raid_bdev1", 00:28:27.486 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:27.486 "strip_size_kb": 0, 00:28:27.486 "state": "online", 00:28:27.486 "raid_level": "raid1", 00:28:27.486 "superblock": false, 00:28:27.486 "num_base_bdevs": 4, 00:28:27.486 "num_base_bdevs_discovered": 3, 00:28:27.486 "num_base_bdevs_operational": 3, 00:28:27.486 "process": { 00:28:27.486 "type": "rebuild", 00:28:27.486 "target": "spare", 00:28:27.486 "progress": { 00:28:27.486 "blocks": 38912, 00:28:27.486 "percent": 59 00:28:27.486 } 00:28:27.486 }, 00:28:27.486 "base_bdevs_list": [ 00:28:27.486 { 00:28:27.486 "name": "spare", 00:28:27.486 "uuid": "69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:27.486 "is_configured": true, 00:28:27.486 "data_offset": 0, 00:28:27.486 "data_size": 65536 00:28:27.486 }, 00:28:27.486 { 
00:28:27.486 "name": null, 00:28:27.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.486 "is_configured": false, 00:28:27.486 "data_offset": 0, 00:28:27.486 "data_size": 65536 00:28:27.486 }, 00:28:27.486 { 00:28:27.486 "name": "BaseBdev3", 00:28:27.486 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:27.486 "is_configured": true, 00:28:27.486 "data_offset": 0, 00:28:27.486 "data_size": 65536 00:28:27.486 }, 00:28:27.486 { 00:28:27.486 "name": "BaseBdev4", 00:28:27.486 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:27.486 "is_configured": true, 00:28:27.486 "data_offset": 0, 00:28:27.486 "data_size": 65536 00:28:27.486 } 00:28:27.486 ] 00:28:27.486 }' 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=894 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.486 23:14:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.744 23:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.744 "name": "raid_bdev1", 00:28:27.744 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:27.744 "strip_size_kb": 0, 00:28:27.744 "state": "online", 00:28:27.744 "raid_level": "raid1", 00:28:27.744 "superblock": false, 00:28:27.744 "num_base_bdevs": 4, 00:28:27.744 "num_base_bdevs_discovered": 3, 00:28:27.744 "num_base_bdevs_operational": 3, 00:28:27.744 "process": { 00:28:27.744 "type": "rebuild", 00:28:27.744 "target": "spare", 00:28:27.744 "progress": { 00:28:27.744 "blocks": 47104, 00:28:27.744 "percent": 71 00:28:27.744 } 00:28:27.744 }, 00:28:27.744 "base_bdevs_list": [ 00:28:27.744 { 00:28:27.744 "name": "spare", 00:28:27.744 "uuid": "69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:27.744 "is_configured": true, 00:28:27.744 "data_offset": 0, 00:28:27.744 "data_size": 65536 00:28:27.744 }, 00:28:27.744 { 00:28:27.744 "name": null, 00:28:27.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.744 "is_configured": false, 00:28:27.744 "data_offset": 0, 00:28:27.744 "data_size": 65536 00:28:27.744 }, 00:28:27.744 { 00:28:27.744 "name": "BaseBdev3", 00:28:27.744 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:27.744 "is_configured": true, 00:28:27.744 "data_offset": 0, 00:28:27.744 "data_size": 65536 00:28:27.744 }, 00:28:27.744 { 
00:28:27.744 "name": "BaseBdev4", 00:28:27.744 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:27.744 "is_configured": true, 00:28:27.744 "data_offset": 0, 00:28:27.744 "data_size": 65536 00:28:27.744 } 00:28:27.744 ] 00:28:27.744 }' 00:28:27.744 23:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:27.744 23:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.744 23:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:27.744 23:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.744 23:14:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:28.685 [2024-07-13 23:14:17.905831] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:28.685 [2024-07-13 23:14:17.906013] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:28.685 [2024-07-13 23:14:17.906147] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.942 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.200 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.200 "name": "raid_bdev1", 00:28:29.200 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:29.201 "strip_size_kb": 0, 00:28:29.201 "state": "online", 00:28:29.201 "raid_level": "raid1", 00:28:29.201 "superblock": false, 00:28:29.201 "num_base_bdevs": 4, 00:28:29.201 "num_base_bdevs_discovered": 3, 00:28:29.201 "num_base_bdevs_operational": 3, 00:28:29.201 "base_bdevs_list": [ 00:28:29.201 { 00:28:29.201 "name": "spare", 00:28:29.201 "uuid": "69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:29.201 "is_configured": true, 00:28:29.201 "data_offset": 0, 00:28:29.201 "data_size": 65536 00:28:29.201 }, 00:28:29.201 { 00:28:29.201 "name": null, 00:28:29.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.201 "is_configured": false, 00:28:29.201 "data_offset": 0, 00:28:29.201 "data_size": 65536 00:28:29.201 }, 00:28:29.201 { 00:28:29.201 "name": "BaseBdev3", 00:28:29.201 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:29.201 "is_configured": true, 00:28:29.201 "data_offset": 0, 00:28:29.201 "data_size": 65536 00:28:29.201 }, 00:28:29.201 { 00:28:29.201 "name": "BaseBdev4", 00:28:29.201 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:29.201 "is_configured": true, 00:28:29.201 "data_offset": 0, 00:28:29.201 "data_size": 65536 00:28:29.201 } 00:28:29.201 ] 00:28:29.201 }' 00:28:29.201 23:14:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.201 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.459 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.459 "name": "raid_bdev1", 00:28:29.459 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:29.459 "strip_size_kb": 0, 00:28:29.459 "state": "online", 00:28:29.459 "raid_level": "raid1", 00:28:29.459 "superblock": false, 00:28:29.459 "num_base_bdevs": 4, 00:28:29.459 "num_base_bdevs_discovered": 3, 00:28:29.459 "num_base_bdevs_operational": 3, 00:28:29.459 "base_bdevs_list": [ 00:28:29.459 { 00:28:29.459 "name": "spare", 00:28:29.459 "uuid": "69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:29.459 "is_configured": true, 00:28:29.459 "data_offset": 0, 00:28:29.459 "data_size": 65536 00:28:29.459 }, 00:28:29.459 { 00:28:29.459 "name": null, 00:28:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.459 "is_configured": false, 00:28:29.459 "data_offset": 0, 00:28:29.459 "data_size": 65536 00:28:29.459 }, 00:28:29.459 { 00:28:29.459 "name": "BaseBdev3", 00:28:29.459 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:29.459 "is_configured": true, 00:28:29.459 "data_offset": 0, 00:28:29.459 "data_size": 65536 00:28:29.459 }, 00:28:29.459 { 00:28:29.459 "name": "BaseBdev4", 00:28:29.459 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:29.459 "is_configured": true, 00:28:29.459 "data_offset": 0, 00:28:29.459 "data_size": 65536 00:28:29.459 } 00:28:29.459 ] 00:28:29.459 }' 00:28:29.459 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.459 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:29.459 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.717 
23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.717 23:14:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.975 23:14:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.975 "name": "raid_bdev1", 00:28:29.975 "uuid": "36d96702-72ed-4b99-991d-7ca5d237fae5", 00:28:29.975 "strip_size_kb": 0, 00:28:29.975 "state": "online", 00:28:29.975 "raid_level": "raid1", 00:28:29.975 "superblock": false, 00:28:29.975 "num_base_bdevs": 4, 00:28:29.975 "num_base_bdevs_discovered": 3, 00:28:29.975 "num_base_bdevs_operational": 3, 00:28:29.975 "base_bdevs_list": [ 00:28:29.975 { 00:28:29.975 "name": "spare", 00:28:29.975 "uuid": "69c7d64f-71f8-5169-a255-ffb6f13fd6a3", 00:28:29.975 "is_configured": true, 00:28:29.975 "data_offset": 0, 00:28:29.975 "data_size": 65536 00:28:29.975 }, 00:28:29.975 { 00:28:29.975 "name": null, 00:28:29.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.975 "is_configured": false, 00:28:29.975 "data_offset": 0, 00:28:29.975 "data_size": 65536 00:28:29.975 }, 00:28:29.975 { 00:28:29.975 "name": "BaseBdev3", 00:28:29.975 "uuid": "c5a7c94e-b835-5b84-bc1a-bb94783b32bf", 00:28:29.975 "is_configured": true, 00:28:29.975 "data_offset": 0, 00:28:29.975 "data_size": 65536 00:28:29.975 }, 00:28:29.975 { 00:28:29.975 "name": "BaseBdev4", 00:28:29.975 "uuid": "00a02c4d-fd2d-5adf-ae31-2b63b103512a", 00:28:29.975 "is_configured": true, 00:28:29.975 "data_offset": 0, 00:28:29.975 "data_size": 65536 00:28:29.975 } 00:28:29.975 ] 00:28:29.975 }' 00:28:29.975 23:14:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.975 23:14:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.540 23:14:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:30.797 [2024-07-13 23:14:20.001796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:30.797 [2024-07-13 23:14:20.001906] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:30.797 [2024-07-13 23:14:20.002078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.797 [2024-07-13 23:14:20.002241] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.797 [2024-07-13 23:14:20.002261] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:30.797 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.797 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:31.055 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:31.315 /dev/nbd0 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.315 1+0 records in 00:28:31.315 1+0 records out 00:28:31.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638972 s, 6.4 MB/s 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@887 -- # return 0 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:31.315 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:31.574 /dev/nbd1 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.574 1+0 records in 00:28:31.574 1+0 records out 00:28:31.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432892 s, 9.5 MB/s 00:28:31.574 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.575 23:14:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.139 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 156639 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 156639 ']' 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 156639 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:28:32.397 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:32.398 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156639 00:28:32.398 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:32.398 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:32.398 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156639' 00:28:32.398 killing process with pid 156639 00:28:32.398 23:14:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 156639 00:28:32.398 Received shutdown signal, test time was about 60.000000 seconds 00:28:32.398 00:28:32.398 Latency(us) 00:28:32.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.398 =================================================================================================================== 00:28:32.398 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:32.398 [2024-07-13 23:14:21.609712] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:32.398 23:14:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 156639 00:28:32.398 [2024-07-13 23:14:21.693828] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:32.656 23:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:28:32.656 00:28:32.656 real 0m26.261s 00:28:32.656 user 0m36.118s 00:28:32.656 sys 0m5.104s 00:28:32.656 23:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:32.656 23:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.656 ************************************ 00:28:32.656 END TEST raid_rebuild_test 00:28:32.656 ************************************ 00:28:32.914 23:14:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:32.914 23:14:22 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:28:32.914 23:14:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:32.914 23:14:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.914 23:14:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.914 ************************************ 00:28:32.914 START TEST raid_rebuild_test_sb 00:28:32.914 ************************************ 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:32.914 23:14:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:32.914 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=157228 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 157228 /var/tmp/spdk-raid.sock 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 157228 ']' 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.915 23:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:32.915 [2024-07-13 23:14:22.178989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:32.915 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:32.915 Zero copy mechanism will not be used. 
00:28:32.915 [2024-07-13 23:14:22.179192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157228 ] 00:28:33.173 [2024-07-13 23:14:22.320753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.173 [2024-07-13 23:14:22.430017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.173 [2024-07-13 23:14:22.502993] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:34.107 23:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.107 23:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:28:34.107 23:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:34.107 23:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:34.107 BaseBdev1_malloc 00:28:34.107 23:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:34.364 [2024-07-13 23:14:23.659616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:34.364 [2024-07-13 23:14:23.659771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.364 [2024-07-13 23:14:23.659819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:28:34.364 [2024-07-13 23:14:23.659889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.364 [2024-07-13 23:14:23.662679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.364 [2024-07-13 23:14:23.662749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:34.364 BaseBdev1 00:28:34.364 23:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:34.364 23:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:34.672 BaseBdev2_malloc 00:28:34.672 23:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:34.951 [2024-07-13 23:14:24.161883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:34.951 [2024-07-13 23:14:24.162026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.951 [2024-07-13 23:14:24.162075] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:34.951 [2024-07-13 23:14:24.162132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.951 [2024-07-13 23:14:24.164839] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.951 [2024-07-13 23:14:24.164892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:34.951 BaseBdev2 00:28:34.951 23:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # 
for bdev in "${base_bdevs[@]}" 00:28:34.951 23:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:35.210 BaseBdev3_malloc 00:28:35.210 23:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:35.470 [2024-07-13 23:14:24.671039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:35.470 [2024-07-13 23:14:24.671224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.470 [2024-07-13 23:14:24.671291] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:35.470 [2024-07-13 23:14:24.671371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.470 [2024-07-13 23:14:24.674765] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.470 [2024-07-13 23:14:24.674834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:35.470 BaseBdev3 00:28:35.470 23:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:35.470 23:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:35.729 BaseBdev4_malloc 00:28:35.729 23:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:35.988 [2024-07-13 23:14:25.211382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:35.988 [2024-07-13 23:14:25.211606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.988 [2024-07-13 23:14:25.211658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:35.988 [2024-07-13 23:14:25.211724] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.988 [2024-07-13 23:14:25.214800] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.988 [2024-07-13 23:14:25.214868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:35.988 BaseBdev4 00:28:35.988 23:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:36.246 spare_malloc 00:28:36.246 23:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:36.504 spare_delay 00:28:36.504 23:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:36.763 [2024-07-13 23:14:25.955878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:36.763 [2024-07-13 23:14:25.955991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.763 [2024-07-13 23:14:25.956033] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000009080 00:28:36.763 [2024-07-13 23:14:25.956084] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.763 [2024-07-13 23:14:25.959122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.763 [2024-07-13 23:14:25.959229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:36.763 spare 00:28:36.763 23:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:37.023 [2024-07-13 23:14:26.208194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:37.023 [2024-07-13 23:14:26.210944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:37.023 [2024-07-13 23:14:26.211038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:37.023 [2024-07-13 23:14:26.211114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:37.023 [2024-07-13 23:14:26.211447] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:37.023 [2024-07-13 23:14:26.211515] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:37.023 [2024-07-13 23:14:26.211712] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:37.023 [2024-07-13 23:14:26.212181] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:37.023 [2024-07-13 23:14:26.212213] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:37.023 [2024-07-13 23:14:26.212482] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.023 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.282 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.282 "name": "raid_bdev1", 00:28:37.282 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 
00:28:37.282 "strip_size_kb": 0, 00:28:37.282 "state": "online", 00:28:37.282 "raid_level": "raid1", 00:28:37.282 "superblock": true, 00:28:37.282 "num_base_bdevs": 4, 00:28:37.282 "num_base_bdevs_discovered": 4, 00:28:37.282 "num_base_bdevs_operational": 4, 00:28:37.282 "base_bdevs_list": [ 00:28:37.282 { 00:28:37.282 "name": "BaseBdev1", 00:28:37.282 "uuid": "2eda90ad-1e81-527a-8dea-f977fa646f0e", 00:28:37.282 "is_configured": true, 00:28:37.282 "data_offset": 2048, 00:28:37.282 "data_size": 63488 00:28:37.282 }, 00:28:37.282 { 00:28:37.282 "name": "BaseBdev2", 00:28:37.282 "uuid": "2f05cb7f-bd02-5261-9e65-bd8420e3a2bd", 00:28:37.282 "is_configured": true, 00:28:37.282 "data_offset": 2048, 00:28:37.282 "data_size": 63488 00:28:37.282 }, 00:28:37.282 { 00:28:37.282 "name": "BaseBdev3", 00:28:37.282 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:37.282 "is_configured": true, 00:28:37.282 "data_offset": 2048, 00:28:37.282 "data_size": 63488 00:28:37.282 }, 00:28:37.282 { 00:28:37.282 "name": "BaseBdev4", 00:28:37.282 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:37.282 "is_configured": true, 00:28:37.282 "data_offset": 2048, 00:28:37.282 "data_size": 63488 00:28:37.282 } 00:28:37.282 ] 00:28:37.282 }' 00:28:37.282 23:14:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.282 23:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.851 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:37.851 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:38.110 [2024-07-13 23:14:27.385299] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:38.110 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:38.110 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.110 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:38.369 23:14:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:38.369 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:38.628 [2024-07-13 23:14:27.889034] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:28:38.628 /dev/nbd0 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:38.628 1+0 records in 00:28:38.628 1+0 records out 00:28:38.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703156 s, 5.8 MB/s 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:38.628 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:38.629 23:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:46.744 63488+0 records in 00:28:46.744 63488+0 records out 00:28:46.744 32505856 bytes (33 MB, 31 MiB) copied, 7.64904 s, 4.2 MB/s 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:46.744 23:14:35 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:46.744 [2024-07-13 23:14:35.915230] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:46.744 23:14:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:47.002 [2024-07-13 23:14:36.214774] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.002 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.260 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.260 "name": "raid_bdev1", 00:28:47.260 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:47.260 "strip_size_kb": 0, 00:28:47.260 "state": "online", 00:28:47.260 "raid_level": "raid1", 00:28:47.260 "superblock": true, 00:28:47.260 "num_base_bdevs": 4, 00:28:47.260 "num_base_bdevs_discovered": 3, 00:28:47.260 "num_base_bdevs_operational": 3, 00:28:47.260 "base_bdevs_list": [ 00:28:47.260 { 00:28:47.260 "name": null, 00:28:47.260 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:47.260 "is_configured": false, 00:28:47.260 "data_offset": 2048, 00:28:47.260 "data_size": 63488 00:28:47.260 }, 00:28:47.260 { 00:28:47.260 "name": "BaseBdev2", 00:28:47.260 "uuid": "2f05cb7f-bd02-5261-9e65-bd8420e3a2bd", 00:28:47.260 "is_configured": true, 00:28:47.260 "data_offset": 2048, 00:28:47.260 "data_size": 63488 00:28:47.260 }, 00:28:47.260 { 00:28:47.260 "name": "BaseBdev3", 00:28:47.260 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:47.260 "is_configured": true, 00:28:47.260 "data_offset": 2048, 00:28:47.260 "data_size": 63488 00:28:47.260 }, 00:28:47.260 { 00:28:47.260 "name": "BaseBdev4", 00:28:47.260 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:47.260 "is_configured": true, 00:28:47.260 "data_offset": 2048, 00:28:47.260 "data_size": 63488 00:28:47.260 } 00:28:47.260 ] 00:28:47.260 }' 00:28:47.260 23:14:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.260 23:14:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.827 23:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:48.085 [2024-07-13 23:14:37.410006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.085 [2024-07-13 23:14:37.415918] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:28:48.085 [2024-07-13 23:14:37.418450] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:48.085 23:14:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.462 "name": "raid_bdev1", 00:28:49.462 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:49.462 "strip_size_kb": 0, 00:28:49.462 "state": "online", 00:28:49.462 "raid_level": "raid1", 00:28:49.462 "superblock": true, 00:28:49.462 "num_base_bdevs": 4, 00:28:49.462 "num_base_bdevs_discovered": 4, 00:28:49.462 "num_base_bdevs_operational": 4, 00:28:49.462 "process": { 00:28:49.462 "type": "rebuild", 00:28:49.462 "target": "spare", 00:28:49.462 "progress": { 00:28:49.462 "blocks": 24576, 00:28:49.462 "percent": 38 00:28:49.462 } 00:28:49.462 }, 00:28:49.462 "base_bdevs_list": [ 00:28:49.462 { 00:28:49.462 "name": "spare", 00:28:49.462 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:49.462 "is_configured": true, 00:28:49.462 "data_offset": 2048, 00:28:49.462 "data_size": 63488 00:28:49.462 }, 00:28:49.462 { 
00:28:49.462 "name": "BaseBdev2", 00:28:49.462 "uuid": "2f05cb7f-bd02-5261-9e65-bd8420e3a2bd", 00:28:49.462 "is_configured": true, 00:28:49.462 "data_offset": 2048, 00:28:49.462 "data_size": 63488 00:28:49.462 }, 00:28:49.462 { 00:28:49.462 "name": "BaseBdev3", 00:28:49.462 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:49.462 "is_configured": true, 00:28:49.462 "data_offset": 2048, 00:28:49.462 "data_size": 63488 00:28:49.462 }, 00:28:49.462 { 00:28:49.462 "name": "BaseBdev4", 00:28:49.462 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:49.462 "is_configured": true, 00:28:49.462 "data_offset": 2048, 00:28:49.462 "data_size": 63488 00:28:49.462 } 00:28:49.462 ] 00:28:49.462 }' 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.462 23:14:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:49.721 [2024-07-13 23:14:39.041240] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.980 [2024-07-13 23:14:39.130480] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:49.980 [2024-07-13 23:14:39.130599] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.980 [2024-07-13 23:14:39.130626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.980 [2024-07-13 23:14:39.130637] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.980 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.239 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.239 "name": "raid_bdev1", 00:28:50.239 "uuid": 
"c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:50.239 "strip_size_kb": 0, 00:28:50.239 "state": "online", 00:28:50.239 "raid_level": "raid1", 00:28:50.239 "superblock": true, 00:28:50.239 "num_base_bdevs": 4, 00:28:50.239 "num_base_bdevs_discovered": 3, 00:28:50.239 "num_base_bdevs_operational": 3, 00:28:50.239 "base_bdevs_list": [ 00:28:50.239 { 00:28:50.239 "name": null, 00:28:50.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.239 "is_configured": false, 00:28:50.239 "data_offset": 2048, 00:28:50.239 "data_size": 63488 00:28:50.239 }, 00:28:50.239 { 00:28:50.239 "name": "BaseBdev2", 00:28:50.239 "uuid": "2f05cb7f-bd02-5261-9e65-bd8420e3a2bd", 00:28:50.239 "is_configured": true, 00:28:50.239 "data_offset": 2048, 00:28:50.239 "data_size": 63488 00:28:50.239 }, 00:28:50.239 { 00:28:50.239 "name": "BaseBdev3", 00:28:50.239 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:50.239 "is_configured": true, 00:28:50.239 "data_offset": 2048, 00:28:50.239 "data_size": 63488 00:28:50.239 }, 00:28:50.239 { 00:28:50.239 "name": "BaseBdev4", 00:28:50.239 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:50.239 "is_configured": true, 00:28:50.239 "data_offset": 2048, 00:28:50.239 "data_size": 63488 00:28:50.239 } 00:28:50.239 ] 00:28:50.239 }' 00:28:50.240 23:14:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.240 23:14:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.806 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.065 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.065 "name": "raid_bdev1", 00:28:51.065 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:51.065 "strip_size_kb": 0, 00:28:51.065 "state": "online", 00:28:51.065 "raid_level": "raid1", 00:28:51.065 "superblock": true, 00:28:51.065 "num_base_bdevs": 4, 00:28:51.065 "num_base_bdevs_discovered": 3, 00:28:51.065 "num_base_bdevs_operational": 3, 00:28:51.065 "base_bdevs_list": [ 00:28:51.065 { 00:28:51.065 "name": null, 00:28:51.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.065 "is_configured": false, 00:28:51.065 "data_offset": 2048, 00:28:51.065 "data_size": 63488 00:28:51.065 }, 00:28:51.065 { 00:28:51.065 "name": "BaseBdev2", 00:28:51.065 "uuid": "2f05cb7f-bd02-5261-9e65-bd8420e3a2bd", 00:28:51.065 "is_configured": true, 00:28:51.065 "data_offset": 2048, 00:28:51.065 "data_size": 63488 00:28:51.065 }, 00:28:51.065 { 00:28:51.065 "name": "BaseBdev3", 00:28:51.065 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:51.065 "is_configured": true, 00:28:51.065 "data_offset": 2048, 00:28:51.065 "data_size": 63488 00:28:51.065 }, 00:28:51.065 { 00:28:51.065 "name": "BaseBdev4", 00:28:51.065 
"uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:51.065 "is_configured": true, 00:28:51.065 "data_offset": 2048, 00:28:51.065 "data_size": 63488 00:28:51.065 } 00:28:51.065 ] 00:28:51.065 }' 00:28:51.065 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:51.065 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:51.065 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:51.065 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:51.065 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:51.324 [2024-07-13 23:14:40.565332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:51.324 [2024-07-13 23:14:40.570907] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:28:51.324 [2024-07-13 23:14:40.573080] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.324 23:14:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.261 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.520 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:52.520 "name": "raid_bdev1", 00:28:52.520 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:52.520 "strip_size_kb": 0, 00:28:52.520 "state": "online", 00:28:52.520 "raid_level": "raid1", 00:28:52.520 "superblock": true, 00:28:52.520 "num_base_bdevs": 4, 00:28:52.520 "num_base_bdevs_discovered": 4, 00:28:52.520 "num_base_bdevs_operational": 4, 00:28:52.520 "process": { 00:28:52.520 "type": "rebuild", 00:28:52.520 "target": "spare", 00:28:52.520 "progress": { 00:28:52.520 "blocks": 24576, 00:28:52.520 "percent": 38 00:28:52.520 } 00:28:52.520 }, 00:28:52.520 "base_bdevs_list": [ 00:28:52.520 { 00:28:52.520 "name": "spare", 00:28:52.520 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:52.520 "is_configured": true, 00:28:52.520 "data_offset": 2048, 00:28:52.520 "data_size": 63488 00:28:52.520 }, 00:28:52.520 { 00:28:52.520 "name": "BaseBdev2", 00:28:52.520 "uuid": "2f05cb7f-bd02-5261-9e65-bd8420e3a2bd", 00:28:52.520 "is_configured": true, 00:28:52.520 "data_offset": 2048, 00:28:52.520 "data_size": 63488 00:28:52.520 }, 00:28:52.520 { 00:28:52.520 "name": "BaseBdev3", 00:28:52.520 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:52.520 "is_configured": true, 00:28:52.520 "data_offset": 2048, 00:28:52.520 "data_size": 63488 00:28:52.520 
}, 00:28:52.520 { 00:28:52.520 "name": "BaseBdev4", 00:28:52.520 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:52.520 "is_configured": true, 00:28:52.520 "data_offset": 2048, 00:28:52.520 "data_size": 63488 00:28:52.520 } 00:28:52.520 ] 00:28:52.520 }' 00:28:52.520 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:52.520 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.520 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:52.778 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.778 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:52.778 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:52.778 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:52.779 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:52.779 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:52.779 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:52.779 23:14:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:52.779 [2024-07-13 23:14:42.183241] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:53.037 [2024-07-13 23:14:42.283872] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.037 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.297 "name": "raid_bdev1", 00:28:53.297 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:53.297 "strip_size_kb": 0, 00:28:53.297 "state": "online", 00:28:53.297 "raid_level": "raid1", 00:28:53.297 "superblock": true, 00:28:53.297 "num_base_bdevs": 4, 00:28:53.297 "num_base_bdevs_discovered": 3, 00:28:53.297 "num_base_bdevs_operational": 3, 00:28:53.297 "process": { 00:28:53.297 "type": "rebuild", 00:28:53.297 "target": "spare", 00:28:53.297 "progress": { 00:28:53.297 "blocks": 36864, 00:28:53.297 "percent": 58 00:28:53.297 } 00:28:53.297 }, 00:28:53.297 
"base_bdevs_list": [ 00:28:53.297 { 00:28:53.297 "name": "spare", 00:28:53.297 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:53.297 "is_configured": true, 00:28:53.297 "data_offset": 2048, 00:28:53.297 "data_size": 63488 00:28:53.297 }, 00:28:53.297 { 00:28:53.297 "name": null, 00:28:53.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.297 "is_configured": false, 00:28:53.297 "data_offset": 2048, 00:28:53.297 "data_size": 63488 00:28:53.297 }, 00:28:53.297 { 00:28:53.297 "name": "BaseBdev3", 00:28:53.297 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:53.297 "is_configured": true, 00:28:53.297 "data_offset": 2048, 00:28:53.297 "data_size": 63488 00:28:53.297 }, 00:28:53.297 { 00:28:53.297 "name": "BaseBdev4", 00:28:53.297 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:53.297 "is_configured": true, 00:28:53.297 "data_offset": 2048, 00:28:53.297 "data_size": 63488 00:28:53.297 } 00:28:53.297 ] 00:28:53.297 }' 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=920 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.297 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.556 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.556 "name": "raid_bdev1", 00:28:53.556 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:53.556 "strip_size_kb": 0, 00:28:53.556 "state": "online", 00:28:53.556 "raid_level": "raid1", 00:28:53.556 "superblock": true, 00:28:53.556 "num_base_bdevs": 4, 00:28:53.556 "num_base_bdevs_discovered": 3, 00:28:53.556 "num_base_bdevs_operational": 3, 00:28:53.556 "process": { 00:28:53.556 "type": "rebuild", 00:28:53.556 "target": "spare", 00:28:53.556 "progress": { 00:28:53.556 "blocks": 43008, 00:28:53.556 "percent": 67 00:28:53.556 } 00:28:53.556 }, 00:28:53.556 "base_bdevs_list": [ 00:28:53.556 { 00:28:53.556 "name": "spare", 00:28:53.556 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:53.556 "is_configured": true, 00:28:53.556 "data_offset": 2048, 00:28:53.556 "data_size": 63488 00:28:53.556 }, 00:28:53.556 { 00:28:53.556 "name": null, 00:28:53.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.556 "is_configured": false, 
00:28:53.556 "data_offset": 2048, 00:28:53.556 "data_size": 63488 00:28:53.556 }, 00:28:53.556 { 00:28:53.556 "name": "BaseBdev3", 00:28:53.556 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:53.556 "is_configured": true, 00:28:53.556 "data_offset": 2048, 00:28:53.556 "data_size": 63488 00:28:53.556 }, 00:28:53.556 { 00:28:53.556 "name": "BaseBdev4", 00:28:53.556 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:53.556 "is_configured": true, 00:28:53.557 "data_offset": 2048, 00:28:53.557 "data_size": 63488 00:28:53.557 } 00:28:53.557 ] 00:28:53.557 }' 00:28:53.557 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.557 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.557 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.816 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.816 23:14:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:54.748 [2024-07-13 23:14:43.795702] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:54.748 [2024-07-13 23:14:43.795828] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:54.748 [2024-07-13 23:14:43.796051] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.748 23:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.006 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:55.006 "name": "raid_bdev1", 00:28:55.006 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:55.006 "strip_size_kb": 0, 00:28:55.006 "state": "online", 00:28:55.006 "raid_level": "raid1", 00:28:55.006 "superblock": true, 00:28:55.006 "num_base_bdevs": 4, 00:28:55.006 "num_base_bdevs_discovered": 3, 00:28:55.006 "num_base_bdevs_operational": 3, 00:28:55.006 "base_bdevs_list": [ 00:28:55.006 { 00:28:55.006 "name": "spare", 00:28:55.006 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:55.006 "is_configured": true, 00:28:55.006 "data_offset": 2048, 00:28:55.006 "data_size": 63488 00:28:55.006 }, 00:28:55.006 { 00:28:55.006 "name": null, 00:28:55.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.006 "is_configured": false, 00:28:55.006 "data_offset": 2048, 00:28:55.006 "data_size": 63488 00:28:55.006 }, 00:28:55.006 { 00:28:55.006 "name": "BaseBdev3", 00:28:55.006 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:55.006 "is_configured": true, 
00:28:55.006 "data_offset": 2048, 00:28:55.006 "data_size": 63488 00:28:55.006 }, 00:28:55.006 { 00:28:55.006 "name": "BaseBdev4", 00:28:55.006 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:55.006 "is_configured": true, 00:28:55.006 "data_offset": 2048, 00:28:55.006 "data_size": 63488 00:28:55.006 } 00:28:55.006 ] 00:28:55.006 }' 00:28:55.006 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:55.006 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:55.006 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:55.305 "name": "raid_bdev1", 00:28:55.305 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:55.305 "strip_size_kb": 0, 00:28:55.305 "state": "online", 00:28:55.305 "raid_level": "raid1", 00:28:55.305 "superblock": true, 00:28:55.305 "num_base_bdevs": 4, 00:28:55.305 "num_base_bdevs_discovered": 3, 00:28:55.305 "num_base_bdevs_operational": 3, 00:28:55.305 "base_bdevs_list": [ 00:28:55.305 { 00:28:55.305 "name": "spare", 00:28:55.305 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:55.305 "is_configured": true, 00:28:55.305 "data_offset": 2048, 00:28:55.305 "data_size": 63488 00:28:55.305 }, 00:28:55.305 { 00:28:55.305 "name": null, 00:28:55.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.305 "is_configured": false, 00:28:55.305 "data_offset": 2048, 00:28:55.305 "data_size": 63488 00:28:55.305 }, 00:28:55.305 { 00:28:55.305 "name": "BaseBdev3", 00:28:55.305 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:55.305 "is_configured": true, 00:28:55.305 "data_offset": 2048, 00:28:55.305 "data_size": 63488 00:28:55.305 }, 00:28:55.305 { 00:28:55.305 "name": "BaseBdev4", 00:28:55.305 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:55.305 "is_configured": true, 00:28:55.305 "data_offset": 2048, 00:28:55.305 "data_size": 63488 00:28:55.305 } 00:28:55.305 ] 00:28:55.305 }' 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:55.305 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.566 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.567 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.567 23:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.825 23:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.825 "name": "raid_bdev1", 00:28:55.825 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:55.825 "strip_size_kb": 0, 00:28:55.825 "state": "online", 00:28:55.825 "raid_level": "raid1", 00:28:55.825 "superblock": true, 00:28:55.825 "num_base_bdevs": 4, 00:28:55.825 "num_base_bdevs_discovered": 3, 00:28:55.825 "num_base_bdevs_operational": 3, 00:28:55.825 "base_bdevs_list": [ 00:28:55.825 { 00:28:55.825 "name": "spare", 00:28:55.825 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:55.825 "is_configured": true, 00:28:55.825 "data_offset": 2048, 00:28:55.825 "data_size": 63488 00:28:55.825 }, 00:28:55.825 { 00:28:55.825 "name": null, 00:28:55.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.825 "is_configured": false, 00:28:55.825 "data_offset": 2048, 00:28:55.825 "data_size": 63488 00:28:55.825 }, 00:28:55.825 { 00:28:55.825 "name": "BaseBdev3", 00:28:55.825 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:55.825 "is_configured": true, 00:28:55.825 "data_offset": 2048, 00:28:55.825 "data_size": 63488 00:28:55.825 }, 00:28:55.825 { 00:28:55.825 "name": "BaseBdev4", 00:28:55.825 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:55.825 "is_configured": true, 00:28:55.825 "data_offset": 2048, 00:28:55.825 "data_size": 63488 00:28:55.825 } 00:28:55.825 ] 00:28:55.825 }' 00:28:55.825 23:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.825 23:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.393 23:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:56.651 [2024-07-13 23:14:45.872073] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:56.651 [2024-07-13 23:14:45.872138] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:56.651 [2024-07-13 23:14:45.872295] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:56.651 [2024-07-13 23:14:45.872477] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:56.651 [2024-07-13 23:14:45.872516] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:56.651 23:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.651 23:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:56.910 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:57.168 /dev/nbd0 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:57.168 1+0 records in 00:28:57.168 1+0 records out 00:28:57.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461562 s, 8.9 MB/s 00:28:57.168 23:14:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:57.168 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:57.426 /dev/nbd1 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:57.427 1+0 records in 00:28:57.427 1+0 records out 00:28:57.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741765 s, 5.5 MB/s 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:57.427 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:57.686 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:57.686 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:57.686 23:14:46 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:57.686 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:57.686 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:57.686 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.686 23:14:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.944 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:58.203 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:58.460 23:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:58.718 [2024-07-13 23:14:47.999935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:58.718 [2024-07-13 23:14:48.000115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:58.718 [2024-07-13 23:14:48.000167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:58.718 [2024-07-13 23:14:48.000205] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:58.718 [2024-07-13 23:14:48.003412] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:58.718 
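The nbd sequence that just completed above exports BaseBdev1 and the rebuilt spare over /dev/nbd0 and /dev/nbd1, waits for each device to become readable, and byte-compares them past the superblock region. A condensed sketch of that readiness-then-compare pattern, assuming the device names and the 1 MiB offset from the trace; the helper name below is illustrative, not the autotest implementation, which writes the probe read to a temp file:

# wait_for_nbd: poll until the kernel lists the nbd device in
# /proc/partitions and a 4 KiB direct read succeeds -- the same two
# checks the waitfornbd trace lines show, retried up to 20 times
wait_for_nbd() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" /proc/partitions &&
            dd if=/dev/$name of=/dev/null bs=4096 count=1 iflag=direct 2> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

wait_for_nbd nbd0
wait_for_nbd nbd1
# skip the first 1 MiB on both devices so the differing raid superblocks
# are excluded, then require the data regions to be byte-identical
cmp -i 1048576 /dev/nbd0 /dev/nbd1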
[2024-07-13 23:14:48.003530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:58.718 [2024-07-13 23:14:48.003657] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:58.718 [2024-07-13 23:14:48.003781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.718 [2024-07-13 23:14:48.004029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:58.718 [2024-07-13 23:14:48.004321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:58.718 spare 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.718 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.718 [2024-07-13 23:14:48.104533] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:28:58.718 [2024-07-13 23:14:48.104578] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:58.718 [2024-07-13 23:14:48.104777] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caee40 00:28:58.718 [2024-07-13 23:14:48.105371] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:28:58.718 [2024-07-13 23:14:48.105400] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:28:58.718 [2024-07-13 23:14:48.105627] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:58.976 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.976 "name": "raid_bdev1", 00:28:58.976 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:58.976 "strip_size_kb": 0, 00:28:58.976 "state": "online", 00:28:58.976 "raid_level": "raid1", 00:28:58.976 "superblock": true, 00:28:58.976 "num_base_bdevs": 4, 00:28:58.976 "num_base_bdevs_discovered": 3, 00:28:58.976 "num_base_bdevs_operational": 3, 00:28:58.976 "base_bdevs_list": [ 00:28:58.976 { 00:28:58.976 "name": "spare", 00:28:58.976 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:58.976 "is_configured": true, 00:28:58.976 "data_offset": 2048, 00:28:58.976 "data_size": 63488 00:28:58.976 }, 00:28:58.976 { 
00:28:58.976 "name": null, 00:28:58.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.976 "is_configured": false, 00:28:58.976 "data_offset": 2048, 00:28:58.976 "data_size": 63488 00:28:58.976 }, 00:28:58.976 { 00:28:58.976 "name": "BaseBdev3", 00:28:58.976 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:58.976 "is_configured": true, 00:28:58.976 "data_offset": 2048, 00:28:58.976 "data_size": 63488 00:28:58.976 }, 00:28:58.976 { 00:28:58.976 "name": "BaseBdev4", 00:28:58.976 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:58.976 "is_configured": true, 00:28:58.976 "data_offset": 2048, 00:28:58.976 "data_size": 63488 00:28:58.976 } 00:28:58.976 ] 00:28:58.976 }' 00:28:58.976 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.976 23:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.541 23:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.799 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.799 "name": "raid_bdev1", 00:28:59.799 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:28:59.799 "strip_size_kb": 0, 00:28:59.799 "state": "online", 00:28:59.799 "raid_level": "raid1", 00:28:59.799 "superblock": true, 00:28:59.799 "num_base_bdevs": 4, 00:28:59.799 "num_base_bdevs_discovered": 3, 00:28:59.799 "num_base_bdevs_operational": 3, 00:28:59.799 "base_bdevs_list": [ 00:28:59.799 { 00:28:59.799 "name": "spare", 00:28:59.799 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:28:59.799 "is_configured": true, 00:28:59.799 "data_offset": 2048, 00:28:59.799 "data_size": 63488 00:28:59.799 }, 00:28:59.799 { 00:28:59.799 "name": null, 00:28:59.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.799 "is_configured": false, 00:28:59.799 "data_offset": 2048, 00:28:59.799 "data_size": 63488 00:28:59.799 }, 00:28:59.799 { 00:28:59.799 "name": "BaseBdev3", 00:28:59.799 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:28:59.799 "is_configured": true, 00:28:59.800 "data_offset": 2048, 00:28:59.800 "data_size": 63488 00:28:59.800 }, 00:28:59.800 { 00:28:59.800 "name": "BaseBdev4", 00:28:59.800 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:28:59.800 "is_configured": true, 00:28:59.800 "data_offset": 2048, 00:28:59.800 "data_size": 63488 00:28:59.800 } 00:28:59.800 ] 00:28:59.800 }' 00:28:59.800 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.800 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:59.800 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:00.057 23:14:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:00.057 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.057 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:00.315 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.315 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:00.573 [2024-07-13 23:14:49.769134] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.573 23:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.831 23:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.831 "name": "raid_bdev1", 00:29:00.831 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:00.831 "strip_size_kb": 0, 00:29:00.831 "state": "online", 00:29:00.831 "raid_level": "raid1", 00:29:00.831 "superblock": true, 00:29:00.831 "num_base_bdevs": 4, 00:29:00.831 "num_base_bdevs_discovered": 2, 00:29:00.831 "num_base_bdevs_operational": 2, 00:29:00.831 "base_bdevs_list": [ 00:29:00.831 { 00:29:00.831 "name": null, 00:29:00.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.831 "is_configured": false, 00:29:00.831 "data_offset": 2048, 00:29:00.831 "data_size": 63488 00:29:00.831 }, 00:29:00.831 { 00:29:00.831 "name": null, 00:29:00.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.831 "is_configured": false, 00:29:00.831 "data_offset": 2048, 00:29:00.831 "data_size": 63488 00:29:00.831 }, 00:29:00.831 { 00:29:00.831 "name": "BaseBdev3", 00:29:00.831 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:00.831 "is_configured": true, 00:29:00.831 "data_offset": 2048, 00:29:00.831 "data_size": 63488 00:29:00.831 }, 00:29:00.831 { 00:29:00.831 "name": "BaseBdev4", 00:29:00.831 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:00.831 "is_configured": true, 00:29:00.831 "data_offset": 2048, 00:29:00.831 "data_size": 
63488 00:29:00.831 } 00:29:00.831 ] 00:29:00.831 }' 00:29:00.831 23:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.831 23:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.399 23:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:01.658 [2024-07-13 23:14:50.885527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:01.658 [2024-07-13 23:14:50.885885] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:01.658 [2024-07-13 23:14:50.885905] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:01.658 [2024-07-13 23:14:50.886017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:01.658 [2024-07-13 23:14:50.891391] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caefe0 00:29:01.658 [2024-07-13 23:14:50.893649] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:01.658 23:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.595 23:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.854 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:02.854 "name": "raid_bdev1", 00:29:02.854 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:02.854 "strip_size_kb": 0, 00:29:02.854 "state": "online", 00:29:02.854 "raid_level": "raid1", 00:29:02.854 "superblock": true, 00:29:02.854 "num_base_bdevs": 4, 00:29:02.854 "num_base_bdevs_discovered": 3, 00:29:02.854 "num_base_bdevs_operational": 3, 00:29:02.854 "process": { 00:29:02.854 "type": "rebuild", 00:29:02.854 "target": "spare", 00:29:02.854 "progress": { 00:29:02.854 "blocks": 24576, 00:29:02.854 "percent": 38 00:29:02.854 } 00:29:02.854 }, 00:29:02.854 "base_bdevs_list": [ 00:29:02.854 { 00:29:02.854 "name": "spare", 00:29:02.854 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:29:02.854 "is_configured": true, 00:29:02.854 "data_offset": 2048, 00:29:02.854 "data_size": 63488 00:29:02.854 }, 00:29:02.854 { 00:29:02.854 "name": null, 00:29:02.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.854 "is_configured": false, 00:29:02.854 "data_offset": 2048, 00:29:02.854 "data_size": 63488 00:29:02.854 }, 00:29:02.854 { 00:29:02.854 "name": "BaseBdev3", 00:29:02.854 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:02.854 "is_configured": true, 00:29:02.854 "data_offset": 2048, 
00:29:02.854 "data_size": 63488 00:29:02.854 }, 00:29:02.854 { 00:29:02.854 "name": "BaseBdev4", 00:29:02.854 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:02.854 "is_configured": true, 00:29:02.854 "data_offset": 2048, 00:29:02.854 "data_size": 63488 00:29:02.854 } 00:29:02.854 ] 00:29:02.854 }' 00:29:02.854 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:02.854 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:02.854 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.112 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.112 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:03.371 [2024-07-13 23:14:52.532046] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.371 [2024-07-13 23:14:52.605182] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:03.371 [2024-07-13 23:14:52.605307] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.371 [2024-07-13 23:14:52.605332] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.371 [2024-07-13 23:14:52.605343] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.371 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.630 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.630 "name": "raid_bdev1", 00:29:03.630 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:03.630 "strip_size_kb": 0, 00:29:03.630 "state": "online", 00:29:03.630 "raid_level": "raid1", 00:29:03.630 "superblock": true, 00:29:03.630 "num_base_bdevs": 4, 00:29:03.630 "num_base_bdevs_discovered": 2, 00:29:03.630 "num_base_bdevs_operational": 2, 00:29:03.630 "base_bdevs_list": [ 00:29:03.630 { 00:29:03.631 "name": null, 00:29:03.631 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:03.631 "is_configured": false, 00:29:03.631 "data_offset": 2048, 00:29:03.631 "data_size": 63488 00:29:03.631 }, 00:29:03.631 { 00:29:03.631 "name": null, 00:29:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.631 "is_configured": false, 00:29:03.631 "data_offset": 2048, 00:29:03.631 "data_size": 63488 00:29:03.631 }, 00:29:03.631 { 00:29:03.631 "name": "BaseBdev3", 00:29:03.631 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:03.631 "is_configured": true, 00:29:03.631 "data_offset": 2048, 00:29:03.631 "data_size": 63488 00:29:03.631 }, 00:29:03.631 { 00:29:03.631 "name": "BaseBdev4", 00:29:03.631 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:03.631 "is_configured": true, 00:29:03.631 "data_offset": 2048, 00:29:03.631 "data_size": 63488 00:29:03.631 } 00:29:03.631 ] 00:29:03.631 }' 00:29:03.631 23:14:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.631 23:14:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.197 23:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:04.455 [2024-07-13 23:14:53.775013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:04.455 [2024-07-13 23:14:53.775168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.455 [2024-07-13 23:14:53.775224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:04.455 [2024-07-13 23:14:53.775253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.455 [2024-07-13 23:14:53.775849] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.455 [2024-07-13 23:14:53.775906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:04.455 [2024-07-13 23:14:53.776046] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:04.455 [2024-07-13 23:14:53.776070] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:04.455 [2024-07-13 23:14:53.776080] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:04.455 [2024-07-13 23:14:53.776147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:04.455 [2024-07-13 23:14:53.781656] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf320 00:29:04.455 spare 00:29:04.455 [2024-07-13 23:14:53.783729] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:04.455 23:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:05.397 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:05.397 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:05.397 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:05.397 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:05.397 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:05.655 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.655 23:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.655 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:05.655 "name": "raid_bdev1", 00:29:05.655 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:05.655 "strip_size_kb": 0, 00:29:05.655 "state": "online", 00:29:05.655 "raid_level": "raid1", 00:29:05.655 "superblock": true, 00:29:05.655 "num_base_bdevs": 4, 00:29:05.655 "num_base_bdevs_discovered": 3, 00:29:05.655 "num_base_bdevs_operational": 3, 00:29:05.655 "process": { 00:29:05.655 "type": "rebuild", 00:29:05.655 "target": "spare", 00:29:05.655 "progress": { 00:29:05.655 "blocks": 24576, 00:29:05.655 "percent": 38 00:29:05.655 } 00:29:05.655 }, 00:29:05.655 "base_bdevs_list": [ 00:29:05.655 { 00:29:05.655 "name": "spare", 00:29:05.655 "uuid": "d45ed1e2-3689-5fb8-8b97-d2ea8aa4b01e", 00:29:05.655 "is_configured": true, 00:29:05.655 "data_offset": 2048, 00:29:05.655 "data_size": 63488 00:29:05.655 }, 00:29:05.655 { 00:29:05.655 "name": null, 00:29:05.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.655 "is_configured": false, 00:29:05.655 "data_offset": 2048, 00:29:05.655 "data_size": 63488 00:29:05.655 }, 00:29:05.655 { 00:29:05.655 "name": "BaseBdev3", 00:29:05.655 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:05.655 "is_configured": true, 00:29:05.655 "data_offset": 2048, 00:29:05.655 "data_size": 63488 00:29:05.655 }, 00:29:05.655 { 00:29:05.655 "name": "BaseBdev4", 00:29:05.655 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:05.655 "is_configured": true, 00:29:05.655 "data_offset": 2048, 00:29:05.655 "data_size": 63488 00:29:05.655 } 00:29:05.655 ] 00:29:05.655 }' 00:29:05.655 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:05.914 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:05.914 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:05.914 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:05.914 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:06.174 [2024-07-13 23:14:55.362503] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.174 [2024-07-13 23:14:55.394369] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:06.174 [2024-07-13 23:14:55.394477] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.174 [2024-07-13 23:14:55.394516] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.174 [2024-07-13 23:14:55.394544] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.174 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.432 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.432 "name": "raid_bdev1", 00:29:06.432 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:06.432 "strip_size_kb": 0, 00:29:06.432 "state": "online", 00:29:06.432 "raid_level": "raid1", 00:29:06.432 "superblock": true, 00:29:06.432 "num_base_bdevs": 4, 00:29:06.433 "num_base_bdevs_discovered": 2, 00:29:06.433 "num_base_bdevs_operational": 2, 00:29:06.433 "base_bdevs_list": [ 00:29:06.433 { 00:29:06.433 "name": null, 00:29:06.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.433 "is_configured": false, 00:29:06.433 "data_offset": 2048, 00:29:06.433 "data_size": 63488 00:29:06.433 }, 00:29:06.433 { 00:29:06.433 "name": null, 00:29:06.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.433 "is_configured": false, 00:29:06.433 "data_offset": 2048, 00:29:06.433 "data_size": 63488 00:29:06.433 }, 00:29:06.433 { 00:29:06.433 "name": "BaseBdev3", 00:29:06.433 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:06.433 "is_configured": true, 00:29:06.433 "data_offset": 2048, 00:29:06.433 "data_size": 63488 00:29:06.433 }, 00:29:06.433 { 00:29:06.433 "name": "BaseBdev4", 00:29:06.433 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:06.433 "is_configured": true, 00:29:06.433 "data_offset": 2048, 00:29:06.433 "data_size": 63488 00:29:06.433 } 00:29:06.433 ] 00:29:06.433 }' 
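When the spare is deleted mid-rebuild, the process-finish path races the removal (hence the "Failed to remove target bdev: No such device" error above), but the array must stay online with two null slots. The state verification in the trace boils down to comparing counters pulled from the same JSON; a condensed sketch, with field names exactly as bdev_raid_get_bdevs emits them here (each test fails the script under set -e if it mismatches):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# still online, still raid1, but only two of four base slots populated
[[ $(jq -r '.state' <<< "$raid_info") == online ]]
[[ $(jq -r '.raid_level' <<< "$raid_info") == raid1 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_info") == 2 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_info") == 2 ]]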
00:29:06.433 23:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.433 23:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.999 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:06.999 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:06.999 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:06.999 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:06.999 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:06.999 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.000 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.259 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.259 "name": "raid_bdev1", 00:29:07.259 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:07.259 "strip_size_kb": 0, 00:29:07.259 "state": "online", 00:29:07.259 "raid_level": "raid1", 00:29:07.259 "superblock": true, 00:29:07.259 "num_base_bdevs": 4, 00:29:07.259 "num_base_bdevs_discovered": 2, 00:29:07.259 "num_base_bdevs_operational": 2, 00:29:07.259 "base_bdevs_list": [ 00:29:07.259 { 00:29:07.259 "name": null, 00:29:07.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.259 "is_configured": false, 00:29:07.259 "data_offset": 2048, 00:29:07.259 "data_size": 63488 00:29:07.259 }, 00:29:07.259 { 00:29:07.259 "name": null, 00:29:07.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.259 "is_configured": false, 00:29:07.259 "data_offset": 2048, 00:29:07.259 "data_size": 63488 00:29:07.259 }, 00:29:07.259 { 00:29:07.259 "name": "BaseBdev3", 00:29:07.259 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:07.259 "is_configured": true, 00:29:07.259 "data_offset": 2048, 00:29:07.259 "data_size": 63488 00:29:07.259 }, 00:29:07.259 { 00:29:07.259 "name": "BaseBdev4", 00:29:07.259 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:07.259 "is_configured": true, 00:29:07.259 "data_offset": 2048, 00:29:07.259 "data_size": 63488 00:29:07.259 } 00:29:07.259 ] 00:29:07.259 }' 00:29:07.259 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:07.259 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:07.259 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:07.517 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:07.517 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:07.776 23:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:08.034 [2024-07-13 23:14:57.237782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:08.034 [2024-07-13 23:14:57.237914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
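BaseBdev1 is brought back by tearing down and recreating its passthru vbdev on top of the surviving malloc bdev, so the on-disk raid superblock survives and examine finds a stale member (sequence 1 versus the array's 6, with a mismatched uuid) that it declines to re-add. A sketch of that step, using the commands and names shown in the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# drop the old passthru and rebuild it on the same backing malloc bdev;
# raid examine then sees the outdated superblock and rejects the member
$rpc bdev_passthru_delete BaseBdev1
$rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

# give examine a moment, then confirm the member count stayed at two
sleep 1
[[ $($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered') == 2 ]]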
00:29:08.034 [2024-07-13 23:14:57.237994] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:08.034 [2024-07-13 23:14:57.238026] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.034 [2024-07-13 23:14:57.238642] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.034 [2024-07-13 23:14:57.238688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:08.034 [2024-07-13 23:14:57.238792] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:08.034 [2024-07-13 23:14:57.238812] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:08.034 [2024-07-13 23:14:57.238821] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:08.034 BaseBdev1 00:29:08.034 23:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.970 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.229 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:09.229 "name": "raid_bdev1", 00:29:09.229 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:09.229 "strip_size_kb": 0, 00:29:09.229 "state": "online", 00:29:09.229 "raid_level": "raid1", 00:29:09.229 "superblock": true, 00:29:09.229 "num_base_bdevs": 4, 00:29:09.229 "num_base_bdevs_discovered": 2, 00:29:09.229 "num_base_bdevs_operational": 2, 00:29:09.229 "base_bdevs_list": [ 00:29:09.229 { 00:29:09.229 "name": null, 00:29:09.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.229 "is_configured": false, 00:29:09.229 "data_offset": 2048, 00:29:09.229 "data_size": 63488 00:29:09.229 }, 00:29:09.229 { 00:29:09.229 "name": null, 00:29:09.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.229 "is_configured": false, 00:29:09.229 "data_offset": 2048, 00:29:09.229 "data_size": 63488 00:29:09.229 }, 00:29:09.229 { 00:29:09.229 "name": "BaseBdev3", 00:29:09.229 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:09.229 "is_configured": 
true, 00:29:09.229 "data_offset": 2048, 00:29:09.229 "data_size": 63488 00:29:09.229 }, 00:29:09.229 { 00:29:09.229 "name": "BaseBdev4", 00:29:09.229 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:09.229 "is_configured": true, 00:29:09.229 "data_offset": 2048, 00:29:09.229 "data_size": 63488 00:29:09.229 } 00:29:09.229 ] 00:29:09.229 }' 00:29:09.229 23:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:09.229 23:14:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.796 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.055 "name": "raid_bdev1", 00:29:10.055 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:10.055 "strip_size_kb": 0, 00:29:10.055 "state": "online", 00:29:10.055 "raid_level": "raid1", 00:29:10.055 "superblock": true, 00:29:10.055 "num_base_bdevs": 4, 00:29:10.055 "num_base_bdevs_discovered": 2, 00:29:10.055 "num_base_bdevs_operational": 2, 00:29:10.055 "base_bdevs_list": [ 00:29:10.055 { 00:29:10.055 "name": null, 00:29:10.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.055 "is_configured": false, 00:29:10.055 "data_offset": 2048, 00:29:10.055 "data_size": 63488 00:29:10.055 }, 00:29:10.055 { 00:29:10.055 "name": null, 00:29:10.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.055 "is_configured": false, 00:29:10.055 "data_offset": 2048, 00:29:10.055 "data_size": 63488 00:29:10.055 }, 00:29:10.055 { 00:29:10.055 "name": "BaseBdev3", 00:29:10.055 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:10.055 "is_configured": true, 00:29:10.055 "data_offset": 2048, 00:29:10.055 "data_size": 63488 00:29:10.055 }, 00:29:10.055 { 00:29:10.055 "name": "BaseBdev4", 00:29:10.055 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:10.055 "is_configured": true, 00:29:10.055 "data_offset": 2048, 00:29:10.055 "data_size": 63488 00:29:10.055 } 00:29:10.055 ] 00:29:10.055 }' 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.055 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.056 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.056 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.056 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.056 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:10.056 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.314 [2024-07-13 23:14:59.654294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:10.314 [2024-07-13 23:14:59.654590] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:10.314 [2024-07-13 23:14:59.654617] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:10.314 request: 00:29:10.314 { 00:29:10.314 "base_bdev": "BaseBdev1", 00:29:10.314 "raid_bdev": "raid_bdev1", 00:29:10.314 "method": "bdev_raid_add_base_bdev", 00:29:10.314 "req_id": 1 00:29:10.314 } 00:29:10.314 Got JSON-RPC error response 00:29:10.314 response: 00:29:10.314 { 00:29:10.314 "code": -22, 00:29:10.314 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:10.314 } 00:29:10.314 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:29:10.314 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:10.314 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:10.314 23:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:10.314 23:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.690 "name": "raid_bdev1", 00:29:11.690 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:11.690 "strip_size_kb": 0, 00:29:11.690 "state": "online", 00:29:11.690 "raid_level": "raid1", 00:29:11.690 "superblock": true, 00:29:11.690 "num_base_bdevs": 4, 00:29:11.690 "num_base_bdevs_discovered": 2, 00:29:11.690 "num_base_bdevs_operational": 2, 00:29:11.690 "base_bdevs_list": [ 00:29:11.690 { 00:29:11.690 "name": null, 00:29:11.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.690 "is_configured": false, 00:29:11.690 "data_offset": 2048, 00:29:11.690 "data_size": 63488 00:29:11.690 }, 00:29:11.690 { 00:29:11.690 "name": null, 00:29:11.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.690 "is_configured": false, 00:29:11.690 "data_offset": 2048, 00:29:11.690 "data_size": 63488 00:29:11.690 }, 00:29:11.690 { 00:29:11.690 "name": "BaseBdev3", 00:29:11.690 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:11.690 "is_configured": true, 00:29:11.690 "data_offset": 2048, 00:29:11.690 "data_size": 63488 00:29:11.690 }, 00:29:11.690 { 00:29:11.690 "name": "BaseBdev4", 00:29:11.690 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:11.690 "is_configured": true, 00:29:11.690 "data_offset": 2048, 00:29:11.690 "data_size": 63488 00:29:11.690 } 00:29:11.690 ] 00:29:11.690 }' 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.690 23:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.257 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.515 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.515 "name": "raid_bdev1", 00:29:12.515 "uuid": "c423db2b-bb18-4617-acb1-ff1baf7511cf", 00:29:12.515 "strip_size_kb": 0, 00:29:12.515 "state": "online", 00:29:12.515 "raid_level": "raid1", 00:29:12.515 "superblock": 
true, 00:29:12.515 "num_base_bdevs": 4, 00:29:12.515 "num_base_bdevs_discovered": 2, 00:29:12.515 "num_base_bdevs_operational": 2, 00:29:12.516 "base_bdevs_list": [ 00:29:12.516 { 00:29:12.516 "name": null, 00:29:12.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.516 "is_configured": false, 00:29:12.516 "data_offset": 2048, 00:29:12.516 "data_size": 63488 00:29:12.516 }, 00:29:12.516 { 00:29:12.516 "name": null, 00:29:12.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.516 "is_configured": false, 00:29:12.516 "data_offset": 2048, 00:29:12.516 "data_size": 63488 00:29:12.516 }, 00:29:12.516 { 00:29:12.516 "name": "BaseBdev3", 00:29:12.516 "uuid": "6625ee70-ebe7-5778-8bc5-89fbf24a27a0", 00:29:12.516 "is_configured": true, 00:29:12.516 "data_offset": 2048, 00:29:12.516 "data_size": 63488 00:29:12.516 }, 00:29:12.516 { 00:29:12.516 "name": "BaseBdev4", 00:29:12.516 "uuid": "33f7f7e9-aaca-53fa-b91b-1d14a64535b4", 00:29:12.516 "is_configured": true, 00:29:12.516 "data_offset": 2048, 00:29:12.516 "data_size": 63488 00:29:12.516 } 00:29:12.516 ] 00:29:12.516 }' 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 157228 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 157228 ']' 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 157228 00:29:12.516 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157228 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157228' 00:29:12.775 killing process with pid 157228 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 157228 00:29:12.775 Received shutdown signal, test time was about 60.000000 seconds 00:29:12.775 00:29:12.775 Latency(us) 00:29:12.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.775 =================================================================================================================== 00:29:12.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:12.775 [2024-07-13 23:15:01.938480] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:12.775 [2024-07-13 23:15:01.938616] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:12.775 23:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 157228 00:29:12.775 [2024-07-13 23:15:01.938716] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:29:12.775 [2024-07-13 23:15:01.938731] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:29:12.775 [2024-07-13 23:15:01.984912] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:29:13.033 00:29:13.033 real 0m40.118s 00:29:13.033 user 0m59.635s 00:29:13.033 sys 0m5.805s 00:29:13.033 ************************************ 00:29:13.033 END TEST raid_rebuild_test_sb 00:29:13.033 ************************************ 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.033 23:15:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:13.033 23:15:02 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:29:13.033 23:15:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:13.033 23:15:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.033 23:15:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:13.033 ************************************ 00:29:13.033 START TEST raid_rebuild_test_io 00:29:13.033 ************************************ 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=158202 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 158202 /var/tmp/spdk-raid.sock 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 158202 ']' 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.033 23:15:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.033 [2024-07-13 23:15:02.357881] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:29:13.033 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:13.033 Zero copy mechanism will not be used. 
00:29:13.034 [2024-07-13 23:15:02.358121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158202 ] 00:29:13.292 [2024-07-13 23:15:02.499767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.292 [2024-07-13 23:15:02.567854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.292 [2024-07-13 23:15:02.621348] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:14.227 23:15:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.227 23:15:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:29:14.227 23:15:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:14.227 23:15:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:14.227 BaseBdev1_malloc 00:29:14.227 23:15:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:14.485 [2024-07-13 23:15:03.837739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:14.485 [2024-07-13 23:15:03.838025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.485 [2024-07-13 23:15:03.838207] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:29:14.485 [2024-07-13 23:15:03.838377] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.485 [2024-07-13 23:15:03.841012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.485 [2024-07-13 23:15:03.841196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:14.485 BaseBdev1 00:29:14.485 23:15:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:14.485 23:15:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:14.742 BaseBdev2_malloc 00:29:14.742 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:15.000 [2024-07-13 23:15:04.288518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:15.000 [2024-07-13 23:15:04.288782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.000 [2024-07-13 23:15:04.288878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:29:15.000 [2024-07-13 23:15:04.289153] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.000 [2024-07-13 23:15:04.291505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.000 [2024-07-13 23:15:04.291703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:15.000 BaseBdev2 00:29:15.000 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # 
for bdev in "${base_bdevs[@]}" 00:29:15.000 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:15.258 BaseBdev3_malloc 00:29:15.258 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:15.516 [2024-07-13 23:15:04.751168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:15.516 [2024-07-13 23:15:04.751431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.516 [2024-07-13 23:15:04.751617] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:15.516 [2024-07-13 23:15:04.751773] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.516 [2024-07-13 23:15:04.754411] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.516 [2024-07-13 23:15:04.754590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:15.516 BaseBdev3 00:29:15.516 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:15.516 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:15.775 BaseBdev4_malloc 00:29:15.775 23:15:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:16.033 [2024-07-13 23:15:05.201605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:16.033 [2024-07-13 23:15:05.201834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.033 [2024-07-13 23:15:05.201941] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:16.033 [2024-07-13 23:15:05.202196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.033 [2024-07-13 23:15:05.204609] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.033 [2024-07-13 23:15:05.204853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:16.034 BaseBdev4 00:29:16.034 23:15:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:16.034 spare_malloc 00:29:16.034 23:15:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:16.299 spare_delay 00:29:16.299 23:15:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:16.586 [2024-07-13 23:15:05.853235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:16.586 [2024-07-13 23:15:05.853539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.586 [2024-07-13 23:15:05.853639] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000009080 00:29:16.586 [2024-07-13 23:15:05.853790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.586 [2024-07-13 23:15:05.856274] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.586 [2024-07-13 23:15:05.856476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:16.586 spare 00:29:16.586 23:15:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:16.844 [2024-07-13 23:15:06.069657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:16.844 [2024-07-13 23:15:06.072128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:16.844 [2024-07-13 23:15:06.072334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:16.844 [2024-07-13 23:15:06.072517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:16.844 [2024-07-13 23:15:06.072815] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:29:16.844 [2024-07-13 23:15:06.072979] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:16.844 [2024-07-13 23:15:06.073339] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:29:16.844 [2024-07-13 23:15:06.074102] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:29:16.844 [2024-07-13 23:15:06.074248] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:29:16.844 [2024-07-13 23:15:06.074693] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.844 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.101 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.101 "name": "raid_bdev1", 00:29:17.101 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 
00:29:17.101 "strip_size_kb": 0, 00:29:17.101 "state": "online", 00:29:17.101 "raid_level": "raid1", 00:29:17.101 "superblock": false, 00:29:17.101 "num_base_bdevs": 4, 00:29:17.101 "num_base_bdevs_discovered": 4, 00:29:17.101 "num_base_bdevs_operational": 4, 00:29:17.101 "base_bdevs_list": [ 00:29:17.101 { 00:29:17.101 "name": "BaseBdev1", 00:29:17.101 "uuid": "a150636a-e420-536e-9933-ada3e383de76", 00:29:17.101 "is_configured": true, 00:29:17.101 "data_offset": 0, 00:29:17.101 "data_size": 65536 00:29:17.101 }, 00:29:17.101 { 00:29:17.101 "name": "BaseBdev2", 00:29:17.101 "uuid": "8ddb39fa-9797-5e52-9ee8-5ca150ba4ae7", 00:29:17.101 "is_configured": true, 00:29:17.101 "data_offset": 0, 00:29:17.101 "data_size": 65536 00:29:17.101 }, 00:29:17.101 { 00:29:17.101 "name": "BaseBdev3", 00:29:17.101 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:17.101 "is_configured": true, 00:29:17.101 "data_offset": 0, 00:29:17.101 "data_size": 65536 00:29:17.101 }, 00:29:17.101 { 00:29:17.101 "name": "BaseBdev4", 00:29:17.101 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:17.101 "is_configured": true, 00:29:17.101 "data_offset": 0, 00:29:17.101 "data_size": 65536 00:29:17.102 } 00:29:17.102 ] 00:29:17.102 }' 00:29:17.102 23:15:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.102 23:15:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:17.667 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:17.667 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:17.926 [2024-07-13 23:15:07.219191] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:17.926 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:17.926 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.926 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:18.184 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:18.184 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:18.184 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:18.184 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:18.184 [2024-07-13 23:15:07.570227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:29:18.184 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:18.184 Zero copy mechanism will not be used. 00:29:18.184 Running I/O for 60 seconds... 
00:29:18.443 [2024-07-13 23:15:07.690667] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:18.443 [2024-07-13 23:15:07.697145] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.443 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.700 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.700 "name": "raid_bdev1", 00:29:18.700 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:18.700 "strip_size_kb": 0, 00:29:18.700 "state": "online", 00:29:18.700 "raid_level": "raid1", 00:29:18.700 "superblock": false, 00:29:18.700 "num_base_bdevs": 4, 00:29:18.700 "num_base_bdevs_discovered": 3, 00:29:18.700 "num_base_bdevs_operational": 3, 00:29:18.700 "base_bdevs_list": [ 00:29:18.700 { 00:29:18.700 "name": null, 00:29:18.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.700 "is_configured": false, 00:29:18.700 "data_offset": 0, 00:29:18.700 "data_size": 65536 00:29:18.700 }, 00:29:18.700 { 00:29:18.700 "name": "BaseBdev2", 00:29:18.700 "uuid": "8ddb39fa-9797-5e52-9ee8-5ca150ba4ae7", 00:29:18.700 "is_configured": true, 00:29:18.700 "data_offset": 0, 00:29:18.700 "data_size": 65536 00:29:18.700 }, 00:29:18.700 { 00:29:18.700 "name": "BaseBdev3", 00:29:18.700 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:18.700 "is_configured": true, 00:29:18.700 "data_offset": 0, 00:29:18.700 "data_size": 65536 00:29:18.700 }, 00:29:18.700 { 00:29:18.700 "name": "BaseBdev4", 00:29:18.700 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:18.700 "is_configured": true, 00:29:18.700 "data_offset": 0, 00:29:18.700 "data_size": 65536 00:29:18.700 } 00:29:18.700 ] 00:29:18.700 }' 00:29:18.700 23:15:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.701 23:15:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.265 23:15:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:19.524 [2024-07-13 23:15:08.778325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:29:19.524 [2024-07-13 23:15:08.828585] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:29:19.524 23:15:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:19.524 [2024-07-13 23:15:08.831616] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:19.782 [2024-07-13 23:15:08.965880] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:19.782 [2024-07-13 23:15:09.106856] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:20.040 [2024-07-13 23:15:09.442870] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:20.298 [2024-07-13 23:15:09.571242] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.556 23:15:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.556 [2024-07-13 23:15:09.926589] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:20.556 [2024-07-13 23:15:09.928183] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:20.815 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:20.815 "name": "raid_bdev1", 00:29:20.815 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:20.815 "strip_size_kb": 0, 00:29:20.815 "state": "online", 00:29:20.815 "raid_level": "raid1", 00:29:20.815 "superblock": false, 00:29:20.815 "num_base_bdevs": 4, 00:29:20.815 "num_base_bdevs_discovered": 4, 00:29:20.815 "num_base_bdevs_operational": 4, 00:29:20.815 "process": { 00:29:20.815 "type": "rebuild", 00:29:20.815 "target": "spare", 00:29:20.815 "progress": { 00:29:20.815 "blocks": 14336, 00:29:20.815 "percent": 21 00:29:20.815 } 00:29:20.815 }, 00:29:20.815 "base_bdevs_list": [ 00:29:20.815 { 00:29:20.815 "name": "spare", 00:29:20.815 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:20.815 "is_configured": true, 00:29:20.815 "data_offset": 0, 00:29:20.815 "data_size": 65536 00:29:20.815 }, 00:29:20.815 { 00:29:20.815 "name": "BaseBdev2", 00:29:20.815 "uuid": "8ddb39fa-9797-5e52-9ee8-5ca150ba4ae7", 00:29:20.815 "is_configured": true, 00:29:20.815 "data_offset": 0, 00:29:20.815 "data_size": 65536 00:29:20.815 }, 00:29:20.815 { 00:29:20.815 "name": "BaseBdev3", 00:29:20.815 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:20.815 "is_configured": true, 00:29:20.815 "data_offset": 0, 00:29:20.815 
"data_size": 65536 00:29:20.815 }, 00:29:20.815 { 00:29:20.815 "name": "BaseBdev4", 00:29:20.815 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:20.815 "is_configured": true, 00:29:20.815 "data_offset": 0, 00:29:20.815 "data_size": 65536 00:29:20.815 } 00:29:20.815 ] 00:29:20.815 }' 00:29:20.815 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:20.815 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:20.815 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:20.815 [2024-07-13 23:15:10.140627] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:20.815 [2024-07-13 23:15:10.141145] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:20.815 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:20.815 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:21.073 [2024-07-13 23:15:10.415139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:21.073 [2024-07-13 23:15:10.464079] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:21.331 [2024-07-13 23:15:10.502140] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:21.331 [2024-07-13 23:15:10.515345] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.331 [2024-07-13 23:15:10.515594] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:21.331 [2024-07-13 23:15:10.515652] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:21.331 [2024-07-13 23:15:10.545620] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:21.331 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:21.331 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.332 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.590 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.590 "name": "raid_bdev1", 00:29:21.590 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:21.590 "strip_size_kb": 0, 00:29:21.590 "state": "online", 00:29:21.590 "raid_level": "raid1", 00:29:21.590 "superblock": false, 00:29:21.590 "num_base_bdevs": 4, 00:29:21.590 "num_base_bdevs_discovered": 3, 00:29:21.590 "num_base_bdevs_operational": 3, 00:29:21.590 "base_bdevs_list": [ 00:29:21.590 { 00:29:21.590 "name": null, 00:29:21.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.590 "is_configured": false, 00:29:21.590 "data_offset": 0, 00:29:21.590 "data_size": 65536 00:29:21.590 }, 00:29:21.590 { 00:29:21.590 "name": "BaseBdev2", 00:29:21.590 "uuid": "8ddb39fa-9797-5e52-9ee8-5ca150ba4ae7", 00:29:21.590 "is_configured": true, 00:29:21.590 "data_offset": 0, 00:29:21.590 "data_size": 65536 00:29:21.590 }, 00:29:21.590 { 00:29:21.590 "name": "BaseBdev3", 00:29:21.590 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:21.590 "is_configured": true, 00:29:21.590 "data_offset": 0, 00:29:21.590 "data_size": 65536 00:29:21.590 }, 00:29:21.590 { 00:29:21.590 "name": "BaseBdev4", 00:29:21.590 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:21.590 "is_configured": true, 00:29:21.590 "data_offset": 0, 00:29:21.590 "data_size": 65536 00:29:21.590 } 00:29:21.590 ] 00:29:21.590 }' 00:29:21.590 23:15:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.590 23:15:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.156 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.414 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:22.414 "name": "raid_bdev1", 00:29:22.414 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:22.414 "strip_size_kb": 0, 00:29:22.414 "state": "online", 00:29:22.414 "raid_level": "raid1", 00:29:22.414 "superblock": false, 00:29:22.414 "num_base_bdevs": 4, 00:29:22.414 "num_base_bdevs_discovered": 3, 00:29:22.414 "num_base_bdevs_operational": 3, 00:29:22.414 "base_bdevs_list": [ 00:29:22.414 { 00:29:22.414 "name": null, 00:29:22.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.414 "is_configured": false, 00:29:22.414 "data_offset": 0, 00:29:22.414 "data_size": 65536 00:29:22.414 }, 00:29:22.414 { 00:29:22.415 "name": "BaseBdev2", 00:29:22.415 "uuid": "8ddb39fa-9797-5e52-9ee8-5ca150ba4ae7", 00:29:22.415 "is_configured": true, 00:29:22.415 "data_offset": 0, 00:29:22.415 "data_size": 65536 00:29:22.415 }, 00:29:22.415 { 00:29:22.415 "name": "BaseBdev3", 00:29:22.415 "uuid": 
"fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:22.415 "is_configured": true, 00:29:22.415 "data_offset": 0, 00:29:22.415 "data_size": 65536 00:29:22.415 }, 00:29:22.415 { 00:29:22.415 "name": "BaseBdev4", 00:29:22.415 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:22.415 "is_configured": true, 00:29:22.415 "data_offset": 0, 00:29:22.415 "data_size": 65536 00:29:22.415 } 00:29:22.415 ] 00:29:22.415 }' 00:29:22.415 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:22.415 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:22.415 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:22.673 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:22.673 23:15:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:22.673 [2024-07-13 23:15:12.024685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:22.931 23:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:22.931 [2024-07-13 23:15:12.102548] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ae0 00:29:22.931 [2024-07-13 23:15:12.105040] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:22.931 [2024-07-13 23:15:12.233407] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:22.931 [2024-07-13 23:15:12.241199] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:23.190 [2024-07-13 23:15:12.452086] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.190 [2024-07-13 23:15:12.453266] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.756 [2024-07-13 23:15:12.921318] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.756 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.020 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.020 "name": "raid_bdev1", 00:29:24.020 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:24.020 "strip_size_kb": 0, 00:29:24.020 "state": "online", 00:29:24.020 "raid_level": "raid1", 00:29:24.020 "superblock": false, 00:29:24.020 "num_base_bdevs": 
4, 00:29:24.020 "num_base_bdevs_discovered": 4, 00:29:24.020 "num_base_bdevs_operational": 4, 00:29:24.020 "process": { 00:29:24.020 "type": "rebuild", 00:29:24.020 "target": "spare", 00:29:24.020 "progress": { 00:29:24.020 "blocks": 14336, 00:29:24.020 "percent": 21 00:29:24.020 } 00:29:24.020 }, 00:29:24.020 "base_bdevs_list": [ 00:29:24.020 { 00:29:24.020 "name": "spare", 00:29:24.020 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:24.020 "is_configured": true, 00:29:24.020 "data_offset": 0, 00:29:24.020 "data_size": 65536 00:29:24.020 }, 00:29:24.020 { 00:29:24.020 "name": "BaseBdev2", 00:29:24.020 "uuid": "8ddb39fa-9797-5e52-9ee8-5ca150ba4ae7", 00:29:24.020 "is_configured": true, 00:29:24.020 "data_offset": 0, 00:29:24.020 "data_size": 65536 00:29:24.020 }, 00:29:24.020 { 00:29:24.020 "name": "BaseBdev3", 00:29:24.020 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:24.020 "is_configured": true, 00:29:24.020 "data_offset": 0, 00:29:24.020 "data_size": 65536 00:29:24.020 }, 00:29:24.020 { 00:29:24.020 "name": "BaseBdev4", 00:29:24.020 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:24.020 "is_configured": true, 00:29:24.020 "data_offset": 0, 00:29:24.020 "data_size": 65536 00:29:24.020 } 00:29:24.020 ] 00:29:24.020 }' 00:29:24.020 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.020 [2024-07-13 23:15:13.388456] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:24.020 [2024-07-13 23:15:13.388894] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:24.020 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:24.020 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:24.282 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:24.282 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:24.282 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:24.282 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:24.282 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:24.282 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:24.282 [2024-07-13 23:15:13.679314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:24.540 [2024-07-13 23:15:13.711647] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:24.540 [2024-07-13 23:15:13.713275] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:24.540 [2024-07-13 23:15:13.822980] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:29:24.540 [2024-07-13 23:15:13.823258] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002ae0 00:29:24.540 [2024-07-13 23:15:13.839881] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:24.540 23:15:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.540 23:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.799 [2024-07-13 23:15:14.053323] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:24.799 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.799 "name": "raid_bdev1", 00:29:24.799 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:24.799 "strip_size_kb": 0, 00:29:24.799 "state": "online", 00:29:24.799 "raid_level": "raid1", 00:29:24.799 "superblock": false, 00:29:24.799 "num_base_bdevs": 4, 00:29:24.799 "num_base_bdevs_discovered": 3, 00:29:24.799 "num_base_bdevs_operational": 3, 00:29:24.799 "process": { 00:29:24.799 "type": "rebuild", 00:29:24.799 "target": "spare", 00:29:24.799 "progress": { 00:29:24.799 "blocks": 22528, 00:29:24.799 "percent": 34 00:29:24.799 } 00:29:24.799 }, 00:29:24.799 "base_bdevs_list": [ 00:29:24.799 { 00:29:24.799 "name": "spare", 00:29:24.799 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:24.799 "is_configured": true, 00:29:24.799 "data_offset": 0, 00:29:24.799 "data_size": 65536 00:29:24.799 }, 00:29:24.799 { 00:29:24.799 "name": null, 00:29:24.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.799 "is_configured": false, 00:29:24.799 "data_offset": 0, 00:29:24.799 "data_size": 65536 00:29:24.799 }, 00:29:24.799 { 00:29:24.799 "name": "BaseBdev3", 00:29:24.799 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:24.799 "is_configured": true, 00:29:24.799 "data_offset": 0, 00:29:24.799 "data_size": 65536 00:29:24.799 }, 00:29:24.799 { 00:29:24.799 "name": "BaseBdev4", 00:29:24.799 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:24.799 "is_configured": true, 00:29:24.799 "data_offset": 0, 00:29:24.799 "data_size": 65536 00:29:24.799 } 00:29:24.799 ] 00:29:24.799 }' 00:29:24.799 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.799 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:24.799 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.057 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.057 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=952 00:29:25.057 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:25.057 23:15:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:25.057 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:25.058 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:25.058 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:25.058 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:25.058 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.058 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.316 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.316 "name": "raid_bdev1", 00:29:25.316 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:25.316 "strip_size_kb": 0, 00:29:25.316 "state": "online", 00:29:25.316 "raid_level": "raid1", 00:29:25.316 "superblock": false, 00:29:25.316 "num_base_bdevs": 4, 00:29:25.316 "num_base_bdevs_discovered": 3, 00:29:25.316 "num_base_bdevs_operational": 3, 00:29:25.316 "process": { 00:29:25.316 "type": "rebuild", 00:29:25.316 "target": "spare", 00:29:25.316 "progress": { 00:29:25.316 "blocks": 26624, 00:29:25.316 "percent": 40 00:29:25.316 } 00:29:25.316 }, 00:29:25.316 "base_bdevs_list": [ 00:29:25.316 { 00:29:25.316 "name": "spare", 00:29:25.316 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:25.316 "is_configured": true, 00:29:25.316 "data_offset": 0, 00:29:25.316 "data_size": 65536 00:29:25.316 }, 00:29:25.316 { 00:29:25.316 "name": null, 00:29:25.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.316 "is_configured": false, 00:29:25.316 "data_offset": 0, 00:29:25.316 "data_size": 65536 00:29:25.316 }, 00:29:25.316 { 00:29:25.316 "name": "BaseBdev3", 00:29:25.316 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:25.316 "is_configured": true, 00:29:25.316 "data_offset": 0, 00:29:25.316 "data_size": 65536 00:29:25.316 }, 00:29:25.316 { 00:29:25.316 "name": "BaseBdev4", 00:29:25.316 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:25.316 "is_configured": true, 00:29:25.316 "data_offset": 0, 00:29:25.316 "data_size": 65536 00:29:25.316 } 00:29:25.316 ] 00:29:25.316 }' 00:29:25.316 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:25.316 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.316 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.316 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.316 23:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:25.316 [2024-07-13 23:15:14.705135] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:25.576 [2024-07-13 23:15:14.821704] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:25.836 [2024-07-13 23:15:15.064675] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:25.836 [2024-07-13 23:15:15.199324] 
bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.404 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.404 [2024-07-13 23:15:15.664573] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:26.664 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.664 "name": "raid_bdev1", 00:29:26.664 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:26.664 "strip_size_kb": 0, 00:29:26.664 "state": "online", 00:29:26.664 "raid_level": "raid1", 00:29:26.664 "superblock": false, 00:29:26.664 "num_base_bdevs": 4, 00:29:26.664 "num_base_bdevs_discovered": 3, 00:29:26.664 "num_base_bdevs_operational": 3, 00:29:26.664 "process": { 00:29:26.664 "type": "rebuild", 00:29:26.664 "target": "spare", 00:29:26.664 "progress": { 00:29:26.664 "blocks": 47104, 00:29:26.664 "percent": 71 00:29:26.664 } 00:29:26.664 }, 00:29:26.664 "base_bdevs_list": [ 00:29:26.664 { 00:29:26.664 "name": "spare", 00:29:26.664 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:26.664 "is_configured": true, 00:29:26.664 "data_offset": 0, 00:29:26.664 "data_size": 65536 00:29:26.664 }, 00:29:26.664 { 00:29:26.664 "name": null, 00:29:26.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.664 "is_configured": false, 00:29:26.664 "data_offset": 0, 00:29:26.664 "data_size": 65536 00:29:26.664 }, 00:29:26.664 { 00:29:26.664 "name": "BaseBdev3", 00:29:26.664 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:26.664 "is_configured": true, 00:29:26.664 "data_offset": 0, 00:29:26.664 "data_size": 65536 00:29:26.664 }, 00:29:26.664 { 00:29:26.665 "name": "BaseBdev4", 00:29:26.665 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:26.665 "is_configured": true, 00:29:26.665 "data_offset": 0, 00:29:26.665 "data_size": 65536 00:29:26.665 } 00:29:26.665 ] 00:29:26.665 }' 00:29:26.665 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.665 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.665 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.665 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.665 23:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:27.602 [2024-07-13 23:15:16.760491] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
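The trace above is the test's rebuild-progress poll: while the shell's SECONDS counter stays under a deadline fixed once before the loop (local timeout=952 in the trace), it fetches all raid bdevs over the test-private RPC socket, selects raid_bdev1 with jq, confirms a rebuild is still running against the spare, and sleeps a second before retrying. A minimal sketch of that loop, reconstructed from the traced commands — the rpc.py path, socket, jq filters, and the 952 deadline are verbatim from the log; the poll_rebuild wrapper name is mine:

    poll_rebuild() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local sock=/var/tmp/spdk-raid.sock
        local timeout=952    # the trace pins the deadline once, before looping
        local info
        while (( SECONDS < timeout )); do
            info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                   jq -r '.[] | select(.name == "raid_bdev1")')
            # once the rebuild completes, .process disappears from the JSON and
            # both filters fall back to "none", which is what breaks the loop
            [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
            [[ $(jq -r '.process.target // "none"' <<<"$info") == spare ]] || break
            sleep 1
        done
    }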
00:29:27.602 [2024-07-13 23:15:16.860509] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:27.602 [2024-07-13 23:15:16.862610] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.602 23:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.199 "name": "raid_bdev1", 00:29:28.199 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:28.199 "strip_size_kb": 0, 00:29:28.199 "state": "online", 00:29:28.199 "raid_level": "raid1", 00:29:28.199 "superblock": false, 00:29:28.199 "num_base_bdevs": 4, 00:29:28.199 "num_base_bdevs_discovered": 3, 00:29:28.199 "num_base_bdevs_operational": 3, 00:29:28.199 "base_bdevs_list": [ 00:29:28.199 { 00:29:28.199 "name": "spare", 00:29:28.199 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:28.199 "is_configured": true, 00:29:28.199 "data_offset": 0, 00:29:28.199 "data_size": 65536 00:29:28.199 }, 00:29:28.199 { 00:29:28.199 "name": null, 00:29:28.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.199 "is_configured": false, 00:29:28.199 "data_offset": 0, 00:29:28.199 "data_size": 65536 00:29:28.199 }, 00:29:28.199 { 00:29:28.199 "name": "BaseBdev3", 00:29:28.199 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:28.199 "is_configured": true, 00:29:28.199 "data_offset": 0, 00:29:28.199 "data_size": 65536 00:29:28.199 }, 00:29:28.199 { 00:29:28.199 "name": "BaseBdev4", 00:29:28.199 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:28.199 "is_configured": true, 00:29:28.199 "data_offset": 0, 00:29:28.199 "data_size": 65536 00:29:28.199 } 00:29:28.199 ] 00:29:28.199 }' 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:28.199 23:15:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.199 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.458 "name": "raid_bdev1", 00:29:28.458 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:28.458 "strip_size_kb": 0, 00:29:28.458 "state": "online", 00:29:28.458 "raid_level": "raid1", 00:29:28.458 "superblock": false, 00:29:28.458 "num_base_bdevs": 4, 00:29:28.458 "num_base_bdevs_discovered": 3, 00:29:28.458 "num_base_bdevs_operational": 3, 00:29:28.458 "base_bdevs_list": [ 00:29:28.458 { 00:29:28.458 "name": "spare", 00:29:28.458 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:28.458 "is_configured": true, 00:29:28.458 "data_offset": 0, 00:29:28.458 "data_size": 65536 00:29:28.458 }, 00:29:28.458 { 00:29:28.458 "name": null, 00:29:28.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.458 "is_configured": false, 00:29:28.458 "data_offset": 0, 00:29:28.458 "data_size": 65536 00:29:28.458 }, 00:29:28.458 { 00:29:28.458 "name": "BaseBdev3", 00:29:28.458 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:28.458 "is_configured": true, 00:29:28.458 "data_offset": 0, 00:29:28.458 "data_size": 65536 00:29:28.458 }, 00:29:28.458 { 00:29:28.458 "name": "BaseBdev4", 00:29:28.458 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:28.458 "is_configured": true, 00:29:28.458 "data_offset": 0, 00:29:28.458 "data_size": 65536 00:29:28.458 } 00:29:28.458 ] 00:29:28.458 }' 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.458 23:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.717 23:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.717 "name": "raid_bdev1", 00:29:28.717 "uuid": "046a9ed5-559a-48bd-9397-10e027234133", 00:29:28.717 "strip_size_kb": 0, 00:29:28.717 "state": "online", 00:29:28.718 "raid_level": "raid1", 00:29:28.718 "superblock": false, 00:29:28.718 "num_base_bdevs": 4, 00:29:28.718 "num_base_bdevs_discovered": 3, 00:29:28.718 "num_base_bdevs_operational": 3, 00:29:28.718 "base_bdevs_list": [ 00:29:28.718 { 00:29:28.718 "name": "spare", 00:29:28.718 "uuid": "593b74fb-0e2a-5d24-9c2a-2bd281fc2509", 00:29:28.718 "is_configured": true, 00:29:28.718 "data_offset": 0, 00:29:28.718 "data_size": 65536 00:29:28.718 }, 00:29:28.718 { 00:29:28.718 "name": null, 00:29:28.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.718 "is_configured": false, 00:29:28.718 "data_offset": 0, 00:29:28.718 "data_size": 65536 00:29:28.718 }, 00:29:28.718 { 00:29:28.718 "name": "BaseBdev3", 00:29:28.718 "uuid": "fe7805b3-3353-5393-9d3b-3080a31c41bd", 00:29:28.718 "is_configured": true, 00:29:28.718 "data_offset": 0, 00:29:28.718 "data_size": 65536 00:29:28.718 }, 00:29:28.718 { 00:29:28.718 "name": "BaseBdev4", 00:29:28.718 "uuid": "ab2fb780-a8f3-5cd4-82e5-1f6ab0e1dd37", 00:29:28.718 "is_configured": true, 00:29:28.718 "data_offset": 0, 00:29:28.718 "data_size": 65536 00:29:28.718 } 00:29:28.718 ] 00:29:28.718 }' 00:29:28.718 23:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.718 23:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.654 23:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:29.654 [2024-07-13 23:15:18.902963] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:29.654 [2024-07-13 23:15:18.903015] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:29.654 00:29:29.654 Latency(us) 00:29:29.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.654 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:29.654 raid_bdev1 : 11.37 105.45 316.36 0.00 0.00 13090.09 283.00 111053.73 00:29:29.654 =================================================================================================================== 00:29:29.654 Total : 105.45 316.36 0.00 0.00 13090.09 283.00 111053.73 00:29:29.654 [2024-07-13 23:15:18.946540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.654 [2024-07-13 23:15:18.946607] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:29.654 [2024-07-13 23:15:18.946733] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:29.654 [2024-07-13 23:15:18.946750] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:29.654 0 00:29:29.654 23:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.654 23:15:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@719 -- # jq length 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.913 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:30.172 /dev/nbd0 00:29:30.172 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:30.172 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:30.172 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:30.172 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:30.172 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:30.172 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.173 1+0 records in 00:29:30.173 1+0 records out 00:29:30.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681748 s, 6.0 MB/s 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.173 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:30.432 /dev/nbd1 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.432 1+0 records in 00:29:30.432 1+0 records out 00:29:30.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227514 s, 18.0 MB/s 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.432 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.691 23:15:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.950 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 
/dev/nbd1 00:29:31.208 /dev/nbd1 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:31.208 1+0 records in 00:29:31.208 1+0 records out 00:29:31.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326545 s, 12.5 MB/s 00:29:31.208 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.209 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
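The nbd_common.sh traces above and below implement the post-rebuild data check: the rebuilt spare is exported as /dev/nbd0, each surviving base bdev is exported in turn as /dev/nbd1, and cmp asserts the two block devices are byte-identical — exactly what raid1 mirroring promises. A removed member leaves an empty slot in base_bdevs and is skipped, which is what the traced '[' -z '' ']' / continue pair does for BaseBdev2. A condensed sketch of the pattern, using only commands that appear in the trace (base_bdevs is the test script's own array; that the spare covers the first slot is my reading of the loop starting at index 1):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk spare /dev/nbd0
    for bdev in "${base_bdevs[@]:1}"; do   # members 1..3; the spare stands in for slot 0
        [ -z "$bdev" ] && continue         # a removed member (BaseBdev2 here) leaves ''
        $rpc nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 0 /dev/nbd0 /dev/nbd1       # -i 0: no superblock, data starts at offset 0
        $rpc nbd_stop_disk /dev/nbd1
    done
    $rpc nbd_stop_disk /dev/nbd0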
00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.467 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 158202 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 158202 ']' 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 158202 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:29:31.726 23:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.726 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158202 00:29:31.726 killing process with pid 158202 00:29:31.726 Received shutdown signal, test time was about 13.448974 seconds 00:29:31.726 00:29:31.726 Latency(us) 00:29:31.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.726 =================================================================================================================== 00:29:31.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.726 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:31.726 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:31.726 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 158202' 00:29:31.726 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 158202 00:29:31.726 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 158202 00:29:31.726 [2024-07-13 23:15:21.021849] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:31.726 [2024-07-13 23:15:21.077035] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:32.294 23:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:32.294 00:29:32.294 real 0m19.100s 00:29:32.294 user 0m30.698s 00:29:32.294 sys 0m2.424s 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.295 ************************************ 00:29:32.295 END TEST raid_rebuild_test_io 00:29:32.295 ************************************ 00:29:32.295 23:15:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:32.295 23:15:21 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:29:32.295 23:15:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:32.295 23:15:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.295 23:15:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:32.295 ************************************ 00:29:32.295 START TEST raid_rebuild_test_sb_io 00:29:32.295 ************************************ 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:32.295 23:15:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=158714 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 158714 /var/tmp/spdk-raid.sock 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 158714 ']' 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.295 23:15:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.295 [2024-07-13 23:15:21.545389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:29:32.295 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:32.295 Zero copy mechanism will not be used. 
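The sb_io variant exercises the same rebuild path under background I/O: the trace above launches bdevperf on a private RPC socket with a 60-second 50/50 random read/write workload of 3 MiB I/Os at queue depth 2 (matching the Job line in the earlier stats), then blocks in waitforlisten until the socket is up. Restated from the traced invocation — flags are verbatim, the comments are my reading of them, and $! stands in for the literal pid 158714:

    # 60 s of 50/50 random read/write, 3 MiB I/Os, queue depth 2, bdev_raid
    # debug logging, all on a test-private RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # the workload itself is only kicked off later, over the same socket:
    #   /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    #       -s /var/tmp/spdk-raid.sock perform_tests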
00:29:32.295 [2024-07-13 23:15:21.545779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158714 ] 00:29:32.295 [2024-07-13 23:15:21.695284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.554 [2024-07-13 23:15:21.797459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.554 [2024-07-13 23:15:21.870141] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.122 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.122 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:29:33.122 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:33.122 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:33.381 BaseBdev1_malloc 00:29:33.381 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:33.640 [2024-07-13 23:15:22.922115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:33.640 [2024-07-13 23:15:22.922466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.640 [2024-07-13 23:15:22.922653] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:29:33.640 [2024-07-13 23:15:22.922888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.640 [2024-07-13 23:15:22.925946] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.640 [2024-07-13 23:15:22.926179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:33.640 BaseBdev1 00:29:33.640 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:33.640 23:15:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:33.899 BaseBdev2_malloc 00:29:33.899 23:15:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:34.157 [2024-07-13 23:15:23.360450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:34.157 [2024-07-13 23:15:23.360802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.157 [2024-07-13 23:15:23.361008] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:29:34.157 [2024-07-13 23:15:23.361162] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.157 [2024-07-13 23:15:23.363729] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.157 [2024-07-13 23:15:23.363923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:34.157 BaseBdev2 00:29:34.157 23:15:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:34.157 23:15:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:34.415 BaseBdev3_malloc 00:29:34.415 23:15:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:34.673 [2024-07-13 23:15:23.862286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:34.673 [2024-07-13 23:15:23.862664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.673 [2024-07-13 23:15:23.862892] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:34.673 [2024-07-13 23:15:23.863065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.673 [2024-07-13 23:15:23.865858] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.673 [2024-07-13 23:15:23.866068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:34.673 BaseBdev3 00:29:34.673 23:15:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:34.673 23:15:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:34.931 BaseBdev4_malloc 00:29:34.931 23:15:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:35.190 [2024-07-13 23:15:24.384218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:35.190 [2024-07-13 23:15:24.384630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.190 [2024-07-13 23:15:24.384875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:35.190 [2024-07-13 23:15:24.385108] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.190 [2024-07-13 23:15:24.387939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.190 [2024-07-13 23:15:24.388143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:35.190 BaseBdev4 00:29:35.190 23:15:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:35.449 spare_malloc 00:29:35.449 23:15:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:35.449 spare_delay 00:29:35.449 23:15:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:35.707 [2024-07-13 23:15:25.024069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:35.707 [2024-07-13 23:15:25.024417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.707 [2024-07-13 23:15:25.024596] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:35.707 [2024-07-13 23:15:25.024760] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.707 [2024-07-13 23:15:25.027522] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.707 [2024-07-13 23:15:25.027729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:35.707 spare 00:29:35.707 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:35.967 [2024-07-13 23:15:25.236181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:35.967 [2024-07-13 23:15:25.238502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:35.967 [2024-07-13 23:15:25.238724] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:35.967 [2024-07-13 23:15:25.238832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:35.967 [2024-07-13 23:15:25.239184] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:29:35.967 [2024-07-13 23:15:25.239312] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:35.967 [2024-07-13 23:15:25.239556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:29:35.967 [2024-07-13 23:15:25.240190] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:29:35.967 [2024-07-13 23:15:25.240352] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:29:35.967 [2024-07-13 23:15:25.240674] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.967 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.968 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.226 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:36.226 
"name": "raid_bdev1", 00:29:36.226 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:36.227 "strip_size_kb": 0, 00:29:36.227 "state": "online", 00:29:36.227 "raid_level": "raid1", 00:29:36.227 "superblock": true, 00:29:36.227 "num_base_bdevs": 4, 00:29:36.227 "num_base_bdevs_discovered": 4, 00:29:36.227 "num_base_bdevs_operational": 4, 00:29:36.227 "base_bdevs_list": [ 00:29:36.227 { 00:29:36.227 "name": "BaseBdev1", 00:29:36.227 "uuid": "2c559022-79f6-50d4-b778-59f451c26533", 00:29:36.227 "is_configured": true, 00:29:36.227 "data_offset": 2048, 00:29:36.227 "data_size": 63488 00:29:36.227 }, 00:29:36.227 { 00:29:36.227 "name": "BaseBdev2", 00:29:36.227 "uuid": "10c4b706-c7d8-59d9-b864-90e4204ca590", 00:29:36.227 "is_configured": true, 00:29:36.227 "data_offset": 2048, 00:29:36.227 "data_size": 63488 00:29:36.227 }, 00:29:36.227 { 00:29:36.227 "name": "BaseBdev3", 00:29:36.227 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:36.227 "is_configured": true, 00:29:36.227 "data_offset": 2048, 00:29:36.227 "data_size": 63488 00:29:36.227 }, 00:29:36.227 { 00:29:36.227 "name": "BaseBdev4", 00:29:36.227 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:36.227 "is_configured": true, 00:29:36.227 "data_offset": 2048, 00:29:36.227 "data_size": 63488 00:29:36.227 } 00:29:36.227 ] 00:29:36.227 }' 00:29:36.227 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:36.227 23:15:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.795 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:36.795 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:37.054 [2024-07-13 23:15:26.421255] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.054 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:37.054 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.054 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:37.313 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:37.313 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:37.313 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:37.313 23:15:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:37.572 [2024-07-13 23:15:26.820484] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:29:37.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:37.572 Zero copy mechanism will not be used. 00:29:37.572 Running I/O for 60 seconds... 
00:29:37.831 [2024-07-13 23:15:26.999743] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:37.831 [2024-07-13 23:15:27.012507] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.831 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.090 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:38.090 "name": "raid_bdev1", 00:29:38.090 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:38.090 "strip_size_kb": 0, 00:29:38.090 "state": "online", 00:29:38.090 "raid_level": "raid1", 00:29:38.090 "superblock": true, 00:29:38.090 "num_base_bdevs": 4, 00:29:38.090 "num_base_bdevs_discovered": 3, 00:29:38.090 "num_base_bdevs_operational": 3, 00:29:38.090 "base_bdevs_list": [ 00:29:38.090 { 00:29:38.090 "name": null, 00:29:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.090 "is_configured": false, 00:29:38.090 "data_offset": 2048, 00:29:38.090 "data_size": 63488 00:29:38.090 }, 00:29:38.090 { 00:29:38.090 "name": "BaseBdev2", 00:29:38.090 "uuid": "10c4b706-c7d8-59d9-b864-90e4204ca590", 00:29:38.090 "is_configured": true, 00:29:38.090 "data_offset": 2048, 00:29:38.090 "data_size": 63488 00:29:38.090 }, 00:29:38.090 { 00:29:38.090 "name": "BaseBdev3", 00:29:38.090 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:38.090 "is_configured": true, 00:29:38.090 "data_offset": 2048, 00:29:38.090 "data_size": 63488 00:29:38.090 }, 00:29:38.090 { 00:29:38.090 "name": "BaseBdev4", 00:29:38.090 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:38.090 "is_configured": true, 00:29:38.090 "data_offset": 2048, 00:29:38.090 "data_size": 63488 00:29:38.090 } 00:29:38.090 ] 00:29:38.090 }' 00:29:38.090 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:38.090 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.657 23:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:38.916 [2024-07-13 
23:15:28.094873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.916 23:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:38.916 [2024-07-13 23:15:28.153038] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:29:38.916 [2024-07-13 23:15:28.156088] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:38.916 [2024-07-13 23:15:28.283591] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:38.916 [2024-07-13 23:15:28.285372] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:39.174 [2024-07-13 23:15:28.511363] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:39.174 [2024-07-13 23:15:28.512472] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:39.740 [2024-07-13 23:15:28.862319] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:39.740 [2024-07-13 23:15:28.863264] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:39.740 [2024-07-13 23:15:28.986490] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.001 "name": "raid_bdev1", 00:29:40.001 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:40.001 "strip_size_kb": 0, 00:29:40.001 "state": "online", 00:29:40.001 "raid_level": "raid1", 00:29:40.001 "superblock": true, 00:29:40.001 "num_base_bdevs": 4, 00:29:40.001 "num_base_bdevs_discovered": 4, 00:29:40.001 "num_base_bdevs_operational": 4, 00:29:40.001 "process": { 00:29:40.001 "type": "rebuild", 00:29:40.001 "target": "spare", 00:29:40.001 "progress": { 00:29:40.001 "blocks": 14336, 00:29:40.001 "percent": 22 00:29:40.001 } 00:29:40.001 }, 00:29:40.001 "base_bdevs_list": [ 00:29:40.001 { 00:29:40.001 "name": "spare", 00:29:40.001 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:40.001 "is_configured": true, 00:29:40.001 "data_offset": 2048, 00:29:40.001 "data_size": 63488 00:29:40.001 }, 00:29:40.001 { 00:29:40.001 "name": "BaseBdev2", 00:29:40.001 "uuid": "10c4b706-c7d8-59d9-b864-90e4204ca590", 00:29:40.001 "is_configured": true, 00:29:40.001 
"data_offset": 2048, 00:29:40.001 "data_size": 63488 00:29:40.001 }, 00:29:40.001 { 00:29:40.001 "name": "BaseBdev3", 00:29:40.001 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:40.001 "is_configured": true, 00:29:40.001 "data_offset": 2048, 00:29:40.001 "data_size": 63488 00:29:40.001 }, 00:29:40.001 { 00:29:40.001 "name": "BaseBdev4", 00:29:40.001 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:40.001 "is_configured": true, 00:29:40.001 "data_offset": 2048, 00:29:40.001 "data_size": 63488 00:29:40.001 } 00:29:40.001 ] 00:29:40.001 }' 00:29:40.001 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.001 [2024-07-13 23:15:29.400980] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:40.264 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.264 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.264 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.264 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:40.522 [2024-07-13 23:15:29.699223] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:40.522 [2024-07-13 23:15:29.733124] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:40.522 [2024-07-13 23:15:29.733938] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:40.522 [2024-07-13 23:15:29.740999] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:40.522 [2024-07-13 23:15:29.752019] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.522 [2024-07-13 23:15:29.752209] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:40.522 [2024-07-13 23:15:29.752265] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:40.522 [2024-07-13 23:15:29.774982] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.522 23:15:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.522 23:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.780 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.780 "name": "raid_bdev1", 00:29:40.780 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:40.780 "strip_size_kb": 0, 00:29:40.780 "state": "online", 00:29:40.780 "raid_level": "raid1", 00:29:40.780 "superblock": true, 00:29:40.780 "num_base_bdevs": 4, 00:29:40.780 "num_base_bdevs_discovered": 3, 00:29:40.780 "num_base_bdevs_operational": 3, 00:29:40.780 "base_bdevs_list": [ 00:29:40.780 { 00:29:40.780 "name": null, 00:29:40.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.780 "is_configured": false, 00:29:40.780 "data_offset": 2048, 00:29:40.780 "data_size": 63488 00:29:40.780 }, 00:29:40.780 { 00:29:40.780 "name": "BaseBdev2", 00:29:40.780 "uuid": "10c4b706-c7d8-59d9-b864-90e4204ca590", 00:29:40.780 "is_configured": true, 00:29:40.780 "data_offset": 2048, 00:29:40.780 "data_size": 63488 00:29:40.780 }, 00:29:40.780 { 00:29:40.780 "name": "BaseBdev3", 00:29:40.780 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:40.780 "is_configured": true, 00:29:40.780 "data_offset": 2048, 00:29:40.780 "data_size": 63488 00:29:40.780 }, 00:29:40.780 { 00:29:40.780 "name": "BaseBdev4", 00:29:40.780 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:40.780 "is_configured": true, 00:29:40.780 "data_offset": 2048, 00:29:40.780 "data_size": 63488 00:29:40.780 } 00:29:40.780 ] 00:29:40.780 }' 00:29:40.780 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.780 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.347 23:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.914 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.914 "name": "raid_bdev1", 00:29:41.914 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:41.914 "strip_size_kb": 0, 00:29:41.914 "state": "online", 00:29:41.914 "raid_level": "raid1", 00:29:41.914 "superblock": true, 00:29:41.914 "num_base_bdevs": 4, 00:29:41.914 "num_base_bdevs_discovered": 3, 00:29:41.914 "num_base_bdevs_operational": 3, 00:29:41.914 "base_bdevs_list": [ 00:29:41.914 { 00:29:41.914 "name": null, 00:29:41.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.914 "is_configured": false, 00:29:41.914 
"data_offset": 2048, 00:29:41.914 "data_size": 63488 00:29:41.914 }, 00:29:41.914 { 00:29:41.914 "name": "BaseBdev2", 00:29:41.914 "uuid": "10c4b706-c7d8-59d9-b864-90e4204ca590", 00:29:41.914 "is_configured": true, 00:29:41.914 "data_offset": 2048, 00:29:41.914 "data_size": 63488 00:29:41.914 }, 00:29:41.914 { 00:29:41.914 "name": "BaseBdev3", 00:29:41.914 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:41.914 "is_configured": true, 00:29:41.914 "data_offset": 2048, 00:29:41.914 "data_size": 63488 00:29:41.914 }, 00:29:41.914 { 00:29:41.914 "name": "BaseBdev4", 00:29:41.914 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:41.914 "is_configured": true, 00:29:41.914 "data_offset": 2048, 00:29:41.914 "data_size": 63488 00:29:41.914 } 00:29:41.914 ] 00:29:41.914 }' 00:29:41.914 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.914 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:41.914 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.914 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:41.914 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:42.172 [2024-07-13 23:15:31.371167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:42.172 [2024-07-13 23:15:31.403699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ae0 00:29:42.172 [2024-07-13 23:15:31.406214] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:42.172 23:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:42.172 [2024-07-13 23:15:31.539553] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:42.431 [2024-07-13 23:15:31.667574] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:42.431 [2024-07-13 23:15:31.668292] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:42.690 [2024-07-13 23:15:32.007671] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:42.948 [2024-07-13 23:15:32.140203] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.207 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.207 [2024-07-13 23:15:32.517439] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:43.465 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.465 "name": "raid_bdev1", 00:29:43.465 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:43.465 "strip_size_kb": 0, 00:29:43.465 "state": "online", 00:29:43.465 "raid_level": "raid1", 00:29:43.465 "superblock": true, 00:29:43.465 "num_base_bdevs": 4, 00:29:43.465 "num_base_bdevs_discovered": 4, 00:29:43.465 "num_base_bdevs_operational": 4, 00:29:43.465 "process": { 00:29:43.465 "type": "rebuild", 00:29:43.465 "target": "spare", 00:29:43.465 "progress": { 00:29:43.465 "blocks": 14336, 00:29:43.465 "percent": 22 00:29:43.465 } 00:29:43.465 }, 00:29:43.466 "base_bdevs_list": [ 00:29:43.466 { 00:29:43.466 "name": "spare", 00:29:43.466 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:43.466 "is_configured": true, 00:29:43.466 "data_offset": 2048, 00:29:43.466 "data_size": 63488 00:29:43.466 }, 00:29:43.466 { 00:29:43.466 "name": "BaseBdev2", 00:29:43.466 "uuid": "10c4b706-c7d8-59d9-b864-90e4204ca590", 00:29:43.466 "is_configured": true, 00:29:43.466 "data_offset": 2048, 00:29:43.466 "data_size": 63488 00:29:43.466 }, 00:29:43.466 { 00:29:43.466 "name": "BaseBdev3", 00:29:43.466 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:43.466 "is_configured": true, 00:29:43.466 "data_offset": 2048, 00:29:43.466 "data_size": 63488 00:29:43.466 }, 00:29:43.466 { 00:29:43.466 "name": "BaseBdev4", 00:29:43.466 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:43.466 "is_configured": true, 00:29:43.466 "data_offset": 2048, 00:29:43.466 "data_size": 63488 00:29:43.466 } 00:29:43.466 ] 00:29:43.466 }' 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.466 [2024-07-13 23:15:32.739649] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:43.466 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:43.466 23:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:43.724 [2024-07-13 23:15:32.994333] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:43.724 [2024-07-13 23:15:32.995045] 
bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:43.724 [2024-07-13 23:15:33.039069] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:43.982 [2024-07-13 23:15:33.220919] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:44.240 [2024-07-13 23:15:33.423231] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:29:44.240 [2024-07-13 23:15:33.423390] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002ae0 00:29:44.240 [2024-07-13 23:15:33.425380] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.240 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.497 [2024-07-13 23:15:33.671087] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.497 "name": "raid_bdev1", 00:29:44.497 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:44.497 "strip_size_kb": 0, 00:29:44.497 "state": "online", 00:29:44.497 "raid_level": "raid1", 00:29:44.497 "superblock": true, 00:29:44.497 "num_base_bdevs": 4, 00:29:44.497 "num_base_bdevs_discovered": 3, 00:29:44.497 "num_base_bdevs_operational": 3, 00:29:44.497 "process": { 00:29:44.497 "type": "rebuild", 00:29:44.497 "target": "spare", 00:29:44.497 "progress": { 00:29:44.497 "blocks": 26624, 00:29:44.497 "percent": 41 00:29:44.497 } 00:29:44.497 }, 00:29:44.497 "base_bdevs_list": [ 00:29:44.497 { 00:29:44.497 "name": "spare", 00:29:44.497 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:44.497 "is_configured": true, 00:29:44.497 "data_offset": 2048, 00:29:44.497 "data_size": 63488 00:29:44.497 }, 00:29:44.497 { 00:29:44.497 "name": null, 00:29:44.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.497 "is_configured": false, 00:29:44.497 "data_offset": 2048, 00:29:44.497 "data_size": 63488 00:29:44.497 }, 00:29:44.497 { 00:29:44.497 "name": "BaseBdev3", 00:29:44.497 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:44.497 "is_configured": true, 00:29:44.497 "data_offset": 2048, 00:29:44.497 "data_size": 63488 00:29:44.497 }, 00:29:44.497 { 00:29:44.497 "name": "BaseBdev4", 00:29:44.497 "uuid": 
"0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:44.497 "is_configured": true, 00:29:44.497 "data_offset": 2048, 00:29:44.497 "data_size": 63488 00:29:44.497 } 00:29:44.497 ] 00:29:44.497 }' 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=971 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.497 23:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.756 [2024-07-13 23:15:34.046059] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:44.756 23:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.756 "name": "raid_bdev1", 00:29:44.756 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:44.756 "strip_size_kb": 0, 00:29:44.756 "state": "online", 00:29:44.756 "raid_level": "raid1", 00:29:44.756 "superblock": true, 00:29:44.756 "num_base_bdevs": 4, 00:29:44.756 "num_base_bdevs_discovered": 3, 00:29:44.756 "num_base_bdevs_operational": 3, 00:29:44.756 "process": { 00:29:44.756 "type": "rebuild", 00:29:44.756 "target": "spare", 00:29:44.756 "progress": { 00:29:44.756 "blocks": 32768, 00:29:44.756 "percent": 51 00:29:44.756 } 00:29:44.756 }, 00:29:44.756 "base_bdevs_list": [ 00:29:44.756 { 00:29:44.756 "name": "spare", 00:29:44.756 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:44.756 "is_configured": true, 00:29:44.756 "data_offset": 2048, 00:29:44.756 "data_size": 63488 00:29:44.756 }, 00:29:44.756 { 00:29:44.756 "name": null, 00:29:44.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.756 "is_configured": false, 00:29:44.756 "data_offset": 2048, 00:29:44.756 "data_size": 63488 00:29:44.756 }, 00:29:44.756 { 00:29:44.756 "name": "BaseBdev3", 00:29:44.756 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:44.756 "is_configured": true, 00:29:44.756 "data_offset": 2048, 00:29:44.756 "data_size": 63488 00:29:44.756 }, 00:29:44.756 { 00:29:44.756 "name": "BaseBdev4", 00:29:44.756 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:44.756 "is_configured": true, 00:29:44.756 "data_offset": 2048, 00:29:44.756 "data_size": 63488 00:29:44.756 } 00:29:44.756 ] 00:29:44.756 }' 
00:29:44.756 23:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.756 23:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.756 23:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.014 [2024-07-13 23:15:34.163077] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:45.014 23:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:45.014 23:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:45.014 [2024-07-13 23:15:34.385021] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:45.273 [2024-07-13 23:15:34.486734] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:45.531 [2024-07-13 23:15:34.849385] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:45.789 [2024-07-13 23:15:35.064387] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.047 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.047 [2024-07-13 23:15:35.282605] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:46.305 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.305 "name": "raid_bdev1", 00:29:46.305 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:46.305 "strip_size_kb": 0, 00:29:46.305 "state": "online", 00:29:46.305 "raid_level": "raid1", 00:29:46.305 "superblock": true, 00:29:46.305 "num_base_bdevs": 4, 00:29:46.305 "num_base_bdevs_discovered": 3, 00:29:46.305 "num_base_bdevs_operational": 3, 00:29:46.305 "process": { 00:29:46.305 "type": "rebuild", 00:29:46.305 "target": "spare", 00:29:46.305 "progress": { 00:29:46.305 "blocks": 53248, 00:29:46.305 "percent": 83 00:29:46.305 } 00:29:46.305 }, 00:29:46.305 "base_bdevs_list": [ 00:29:46.305 { 00:29:46.305 "name": "spare", 00:29:46.305 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:46.305 "is_configured": true, 00:29:46.305 "data_offset": 2048, 00:29:46.305 "data_size": 63488 00:29:46.305 }, 00:29:46.305 { 00:29:46.305 "name": null, 00:29:46.305 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:46.305 "is_configured": false, 00:29:46.305 "data_offset": 2048, 00:29:46.305 "data_size": 63488 00:29:46.305 }, 00:29:46.305 { 00:29:46.305 "name": "BaseBdev3", 00:29:46.305 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:46.305 "is_configured": true, 00:29:46.305 "data_offset": 2048, 00:29:46.305 "data_size": 63488 00:29:46.305 }, 00:29:46.305 { 00:29:46.305 "name": "BaseBdev4", 00:29:46.305 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:46.305 "is_configured": true, 00:29:46.305 "data_offset": 2048, 00:29:46.305 "data_size": 63488 00:29:46.305 } 00:29:46.305 ] 00:29:46.305 }' 00:29:46.305 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.305 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.305 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.305 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.305 23:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:46.565 [2024-07-13 23:15:35.933571] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:46.823 [2024-07-13 23:15:36.040593] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:46.823 [2024-07-13 23:15:36.044836] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.389 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.648 "name": "raid_bdev1", 00:29:47.648 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:47.648 "strip_size_kb": 0, 00:29:47.648 "state": "online", 00:29:47.648 "raid_level": "raid1", 00:29:47.648 "superblock": true, 00:29:47.648 "num_base_bdevs": 4, 00:29:47.648 "num_base_bdevs_discovered": 3, 00:29:47.648 "num_base_bdevs_operational": 3, 00:29:47.648 "base_bdevs_list": [ 00:29:47.648 { 00:29:47.648 "name": "spare", 00:29:47.648 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:47.648 "is_configured": true, 00:29:47.648 "data_offset": 2048, 00:29:47.648 "data_size": 63488 00:29:47.648 }, 00:29:47.648 { 00:29:47.648 "name": null, 00:29:47.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.648 "is_configured": false, 00:29:47.648 "data_offset": 2048, 00:29:47.648 "data_size": 63488 00:29:47.648 }, 00:29:47.648 { 
00:29:47.648 "name": "BaseBdev3", 00:29:47.648 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:47.648 "is_configured": true, 00:29:47.648 "data_offset": 2048, 00:29:47.648 "data_size": 63488 00:29:47.648 }, 00:29:47.648 { 00:29:47.648 "name": "BaseBdev4", 00:29:47.648 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:47.648 "is_configured": true, 00:29:47.648 "data_offset": 2048, 00:29:47.648 "data_size": 63488 00:29:47.648 } 00:29:47.648 ] 00:29:47.648 }' 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:47.648 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.649 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.649 23:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.908 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.908 "name": "raid_bdev1", 00:29:47.908 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:47.908 "strip_size_kb": 0, 00:29:47.908 "state": "online", 00:29:47.908 "raid_level": "raid1", 00:29:47.908 "superblock": true, 00:29:47.908 "num_base_bdevs": 4, 00:29:47.908 "num_base_bdevs_discovered": 3, 00:29:47.908 "num_base_bdevs_operational": 3, 00:29:47.908 "base_bdevs_list": [ 00:29:47.908 { 00:29:47.908 "name": "spare", 00:29:47.908 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:47.908 "is_configured": true, 00:29:47.908 "data_offset": 2048, 00:29:47.908 "data_size": 63488 00:29:47.908 }, 00:29:47.908 { 00:29:47.908 "name": null, 00:29:47.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.908 "is_configured": false, 00:29:47.908 "data_offset": 2048, 00:29:47.908 "data_size": 63488 00:29:47.908 }, 00:29:47.908 { 00:29:47.908 "name": "BaseBdev3", 00:29:47.908 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:47.908 "is_configured": true, 00:29:47.908 "data_offset": 2048, 00:29:47.908 "data_size": 63488 00:29:47.908 }, 00:29:47.908 { 00:29:47.908 "name": "BaseBdev4", 00:29:47.908 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:47.908 "is_configured": true, 00:29:47.908 "data_offset": 2048, 00:29:47.908 "data_size": 63488 00:29:47.908 } 00:29:47.908 ] 00:29:47.908 }' 00:29:47.908 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.908 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == 
\n\o\n\e ]] 00:29:47.908 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.166 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.425 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.425 "name": "raid_bdev1", 00:29:48.425 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:48.425 "strip_size_kb": 0, 00:29:48.425 "state": "online", 00:29:48.425 "raid_level": "raid1", 00:29:48.425 "superblock": true, 00:29:48.425 "num_base_bdevs": 4, 00:29:48.425 "num_base_bdevs_discovered": 3, 00:29:48.425 "num_base_bdevs_operational": 3, 00:29:48.425 "base_bdevs_list": [ 00:29:48.425 { 00:29:48.425 "name": "spare", 00:29:48.425 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:48.425 "is_configured": true, 00:29:48.425 "data_offset": 2048, 00:29:48.425 "data_size": 63488 00:29:48.425 }, 00:29:48.425 { 00:29:48.425 "name": null, 00:29:48.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.425 "is_configured": false, 00:29:48.425 "data_offset": 2048, 00:29:48.425 "data_size": 63488 00:29:48.425 }, 00:29:48.425 { 00:29:48.425 "name": "BaseBdev3", 00:29:48.425 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:48.425 "is_configured": true, 00:29:48.425 "data_offset": 2048, 00:29:48.425 "data_size": 63488 00:29:48.425 }, 00:29:48.425 { 00:29:48.425 "name": "BaseBdev4", 00:29:48.425 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:48.425 "is_configured": true, 00:29:48.425 "data_offset": 2048, 00:29:48.425 "data_size": 63488 00:29:48.425 } 00:29:48.425 ] 00:29:48.425 }' 00:29:48.425 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.425 23:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:48.992 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:49.251 [2024-07-13 23:15:38.499444] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:49.251 [2024-07-13 23:15:38.499690] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:49.251
00:29:49.251 Latency(us)
00:29:49.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:49.251 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:29:49.251 raid_bdev1 : 11.75 110.23 330.70 0.00 0.00 12692.52 283.00 118679.74
00:29:49.251 ===================================================================================================================
00:29:49.251 Total : 110.23 330.70 0.00 0.00 12692.52 283.00 118679.74
00:29:49.251 [2024-07-13 23:15:38.574900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:49.251 [2024-07-13 23:15:38.575114] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:49.251 0 00:29:49.251 [2024-07-13 23:15:38.575276] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:49.251 [2024-07-13 23:15:38.575307] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:49.251 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.251 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:49.510 23:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:49.769 /dev/nbd0 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.769 1+0 records in 00:29:49.769 1+0 records out 00:29:49.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624048 s, 6.6 MB/s 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:49.769 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:50.028 /dev/nbd1 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:50.028 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:50.028 1+0 records in 00:29:50.028 1+0 records out 00:29:50.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346258 s, 11.8 MB/s 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.287 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.559 23:15:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:50.559 23:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:50.830 /dev/nbd1 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:50.830 1+0 records in 00:29:50.830 1+0 records out 00:29:50.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588102 s, 7.0 MB/s 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- 
# rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.830 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.088 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
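The nbd_common.sh traffic through this stretch is the post-rebuild data check: the spare and each surviving base bdev are exported through the kernel NBD driver and compared byte for byte past the superblock region. Condensed into a few lines, with the socket, bdev names and offset as they appear in this run (-i 1048576 skips 1 MiB in both devices, matching the data_offset of 2048 blocks at the 512-byte blocklen reported further down):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0
  for bdev in BaseBdev3 BaseBdev4; do        # BaseBdev2 was removed, so its slot is skipped
      "$rpc" -s "$sock" nbd_start_disk "$bdev" /dev/nbd1
      cmp -i 1048576 /dev/nbd0 /dev/nbd1     # raid1 members must match; any difference fails the test
      "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  done
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0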
00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:51.347 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:51.606 23:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:51.864 [2024-07-13 23:15:41.088950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:51.864 [2024-07-13 23:15:41.089221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.864 [2024-07-13 23:15:41.089396] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:51.864 [2024-07-13 23:15:41.089572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.864 [2024-07-13 23:15:41.092111] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.864 [2024-07-13 23:15:41.092288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:51.864 [2024-07-13 23:15:41.092483] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:51.864 [2024-07-13 23:15:41.092629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:51.864 [2024-07-13 23:15:41.092923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:51.864 [2024-07-13 23:15:41.093251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:51.864 spare 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.864 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:29:51.864 [2024-07-13 23:15:41.193483] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:29:51.864 [2024-07-13 23:15:41.193646] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:51.864 [2024-07-13 23:15:41.193855] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033af0 00:29:51.864 [2024-07-13 23:15:41.194457] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:29:51.864 [2024-07-13 23:15:41.194623] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:29:51.864 [2024-07-13 23:15:41.194854] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:52.121 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:52.121 "name": "raid_bdev1", 00:29:52.121 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:52.121 "strip_size_kb": 0, 00:29:52.121 "state": "online", 00:29:52.121 "raid_level": "raid1", 00:29:52.121 "superblock": true, 00:29:52.121 "num_base_bdevs": 4, 00:29:52.121 "num_base_bdevs_discovered": 3, 00:29:52.121 "num_base_bdevs_operational": 3, 00:29:52.121 "base_bdevs_list": [ 00:29:52.121 { 00:29:52.121 "name": "spare", 00:29:52.121 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:52.121 "is_configured": true, 00:29:52.121 "data_offset": 2048, 00:29:52.121 "data_size": 63488 00:29:52.121 }, 00:29:52.121 { 00:29:52.121 "name": null, 00:29:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.121 "is_configured": false, 00:29:52.121 "data_offset": 2048, 00:29:52.121 "data_size": 63488 00:29:52.121 }, 00:29:52.121 { 00:29:52.121 "name": "BaseBdev3", 00:29:52.121 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:52.121 "is_configured": true, 00:29:52.121 "data_offset": 2048, 00:29:52.121 "data_size": 63488 00:29:52.121 }, 00:29:52.121 { 00:29:52.121 "name": "BaseBdev4", 00:29:52.122 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:52.122 "is_configured": true, 00:29:52.122 "data_offset": 2048, 00:29:52.122 "data_size": 63488 00:29:52.122 } 00:29:52.122 ] 00:29:52.122 }' 00:29:52.122 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:52.122 23:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.685 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.028 "name": "raid_bdev1", 00:29:53.028 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:53.028 "strip_size_kb": 0, 
00:29:53.028 "state": "online", 00:29:53.028 "raid_level": "raid1", 00:29:53.028 "superblock": true, 00:29:53.028 "num_base_bdevs": 4, 00:29:53.028 "num_base_bdevs_discovered": 3, 00:29:53.028 "num_base_bdevs_operational": 3, 00:29:53.028 "base_bdevs_list": [ 00:29:53.028 { 00:29:53.028 "name": "spare", 00:29:53.028 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:53.028 "is_configured": true, 00:29:53.028 "data_offset": 2048, 00:29:53.028 "data_size": 63488 00:29:53.028 }, 00:29:53.028 { 00:29:53.028 "name": null, 00:29:53.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.028 "is_configured": false, 00:29:53.028 "data_offset": 2048, 00:29:53.028 "data_size": 63488 00:29:53.028 }, 00:29:53.028 { 00:29:53.028 "name": "BaseBdev3", 00:29:53.028 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:53.028 "is_configured": true, 00:29:53.028 "data_offset": 2048, 00:29:53.028 "data_size": 63488 00:29:53.028 }, 00:29:53.028 { 00:29:53.028 "name": "BaseBdev4", 00:29:53.028 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:53.028 "is_configured": true, 00:29:53.028 "data_offset": 2048, 00:29:53.028 "data_size": 63488 00:29:53.028 } 00:29:53.028 ] 00:29:53.028 }' 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.028 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:53.286 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:53.286 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:53.544 [2024-07-13 23:15:42.950007] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.802 23:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.060 23:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:54.060 "name": "raid_bdev1", 00:29:54.060 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:54.060 "strip_size_kb": 0, 00:29:54.060 "state": "online", 00:29:54.060 "raid_level": "raid1", 00:29:54.060 "superblock": true, 00:29:54.060 "num_base_bdevs": 4, 00:29:54.060 "num_base_bdevs_discovered": 2, 00:29:54.060 "num_base_bdevs_operational": 2, 00:29:54.060 "base_bdevs_list": [ 00:29:54.060 { 00:29:54.060 "name": null, 00:29:54.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.060 "is_configured": false, 00:29:54.060 "data_offset": 2048, 00:29:54.060 "data_size": 63488 00:29:54.060 }, 00:29:54.060 { 00:29:54.060 "name": null, 00:29:54.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.060 "is_configured": false, 00:29:54.060 "data_offset": 2048, 00:29:54.060 "data_size": 63488 00:29:54.060 }, 00:29:54.060 { 00:29:54.060 "name": "BaseBdev3", 00:29:54.060 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:54.060 "is_configured": true, 00:29:54.060 "data_offset": 2048, 00:29:54.060 "data_size": 63488 00:29:54.060 }, 00:29:54.060 { 00:29:54.060 "name": "BaseBdev4", 00:29:54.060 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:54.060 "is_configured": true, 00:29:54.060 "data_offset": 2048, 00:29:54.060 "data_size": 63488 00:29:54.060 } 00:29:54.060 ] 00:29:54.060 }' 00:29:54.060 23:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:54.060 23:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:54.627 23:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:54.886 [2024-07-13 23:15:44.083474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:54.886 [2024-07-13 23:15:44.083914] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:54.886 [2024-07-13 23:15:44.084079] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
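The two examine_sb lines above are the crux of this test phase: when the passthru bdev spare reappears, raid examine finds its on-disk superblock at sequence number 5, older than the live raid_bdev1 superblock at 6, so instead of trusting the stale member it re-adds spare as a rebuild target. The traces that follow then poll bdev_raid_get_bdevs until the rebuild process shows up in the JSON. A condensed sketch of that verification pattern, assuming the same rpc.py and socket path seen throughout this log (the helper name rebuild_running is illustrative, a reduction of verify_raid_bdev_process rather than the verbatim helper):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    rebuild_running() {
        local bdev=$1 info
        info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$bdev\")")
        [[ $(jq -r '.process.type // "none"' <<< "$info") == "rebuild" ]] &&
            [[ $(jq -r '.process.target // "none"' <<< "$info") == "spare" ]]
    }

    rebuild_running raid_bdev1   # true while the rebuild onto spare is in progress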
00:29:54.886 [2024-07-13 23:15:44.084243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:54.886 [2024-07-13 23:15:44.089313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033c90 00:29:54.886 [2024-07-13 23:15:44.091516] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:54.886 23:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.820 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.078 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.078 "name": "raid_bdev1", 00:29:56.078 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:56.078 "strip_size_kb": 0, 00:29:56.078 "state": "online", 00:29:56.078 "raid_level": "raid1", 00:29:56.078 "superblock": true, 00:29:56.078 "num_base_bdevs": 4, 00:29:56.078 "num_base_bdevs_discovered": 3, 00:29:56.078 "num_base_bdevs_operational": 3, 00:29:56.078 "process": { 00:29:56.078 "type": "rebuild", 00:29:56.078 "target": "spare", 00:29:56.078 "progress": { 00:29:56.078 "blocks": 24576, 00:29:56.078 "percent": 38 00:29:56.078 } 00:29:56.078 }, 00:29:56.078 "base_bdevs_list": [ 00:29:56.078 { 00:29:56.078 "name": "spare", 00:29:56.078 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:56.078 "is_configured": true, 00:29:56.078 "data_offset": 2048, 00:29:56.078 "data_size": 63488 00:29:56.078 }, 00:29:56.078 { 00:29:56.078 "name": null, 00:29:56.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.078 "is_configured": false, 00:29:56.078 "data_offset": 2048, 00:29:56.078 "data_size": 63488 00:29:56.078 }, 00:29:56.078 { 00:29:56.078 "name": "BaseBdev3", 00:29:56.078 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:56.078 "is_configured": true, 00:29:56.078 "data_offset": 2048, 00:29:56.078 "data_size": 63488 00:29:56.078 }, 00:29:56.078 { 00:29:56.078 "name": "BaseBdev4", 00:29:56.078 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:56.078 "is_configured": true, 00:29:56.078 "data_offset": 2048, 00:29:56.078 "data_size": 63488 00:29:56.078 } 00:29:56.078 ] 00:29:56.078 }' 00:29:56.078 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.078 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:56.078 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.078 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:56.078 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:56.337 [2024-07-13 23:15:45.678186] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:56.337 [2024-07-13 23:15:45.700379] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:56.337 [2024-07-13 23:15:45.700597] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.337 [2024-07-13 23:15:45.700732] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:56.337 [2024-07-13 23:15:45.700780] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.337 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.596 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:56.596 "name": "raid_bdev1", 00:29:56.596 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:56.596 "strip_size_kb": 0, 00:29:56.596 "state": "online", 00:29:56.596 "raid_level": "raid1", 00:29:56.596 "superblock": true, 00:29:56.596 "num_base_bdevs": 4, 00:29:56.596 "num_base_bdevs_discovered": 2, 00:29:56.596 "num_base_bdevs_operational": 2, 00:29:56.596 "base_bdevs_list": [ 00:29:56.596 { 00:29:56.596 "name": null, 00:29:56.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.596 "is_configured": false, 00:29:56.596 "data_offset": 2048, 00:29:56.596 "data_size": 63488 00:29:56.596 }, 00:29:56.596 { 00:29:56.596 "name": null, 00:29:56.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.596 "is_configured": false, 00:29:56.596 "data_offset": 2048, 00:29:56.596 "data_size": 63488 00:29:56.596 }, 00:29:56.596 { 00:29:56.596 "name": "BaseBdev3", 00:29:56.596 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:56.596 "is_configured": true, 00:29:56.596 "data_offset": 2048, 00:29:56.596 "data_size": 63488 00:29:56.596 }, 00:29:56.596 { 00:29:56.596 "name": "BaseBdev4", 00:29:56.596 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:56.596 "is_configured": true, 00:29:56.596 "data_offset": 2048, 00:29:56.596 "data_size": 63488 
00:29:56.596 } 00:29:56.596 ] 00:29:56.596 }' 00:29:56.596 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:56.596 23:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:57.532 23:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:57.532 [2024-07-13 23:15:46.782291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:57.532 [2024-07-13 23:15:46.782555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:57.532 [2024-07-13 23:15:46.782635] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:57.532 [2024-07-13 23:15:46.782760] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:57.532 [2024-07-13 23:15:46.783306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:57.532 [2024-07-13 23:15:46.783448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:57.532 [2024-07-13 23:15:46.783665] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:57.532 [2024-07-13 23:15:46.783802] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:57.532 [2024-07-13 23:15:46.783900] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:57.532 [2024-07-13 23:15:46.784053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:57.532 [2024-07-13 23:15:46.788868] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033fd0 00:29:57.532 spare 00:29:57.532 [2024-07-13 23:15:46.791185] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:57.532 23:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.468 23:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.726 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:58.726 "name": "raid_bdev1", 00:29:58.726 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:58.726 "strip_size_kb": 0, 00:29:58.726 "state": "online", 00:29:58.726 "raid_level": "raid1", 00:29:58.726 "superblock": true, 00:29:58.726 "num_base_bdevs": 4, 00:29:58.726 "num_base_bdevs_discovered": 3, 00:29:58.726 "num_base_bdevs_operational": 3, 00:29:58.726 "process": { 00:29:58.726 "type": "rebuild", 00:29:58.726 "target": 
"spare", 00:29:58.726 "progress": { 00:29:58.726 "blocks": 24576, 00:29:58.726 "percent": 38 00:29:58.726 } 00:29:58.726 }, 00:29:58.726 "base_bdevs_list": [ 00:29:58.726 { 00:29:58.726 "name": "spare", 00:29:58.726 "uuid": "05b46bdb-85aa-5b43-ada7-754c58ad7e76", 00:29:58.726 "is_configured": true, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 }, 00:29:58.726 { 00:29:58.726 "name": null, 00:29:58.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.726 "is_configured": false, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 }, 00:29:58.726 { 00:29:58.726 "name": "BaseBdev3", 00:29:58.726 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:58.726 "is_configured": true, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 }, 00:29:58.726 { 00:29:58.726 "name": "BaseBdev4", 00:29:58.726 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:58.726 "is_configured": true, 00:29:58.726 "data_offset": 2048, 00:29:58.726 "data_size": 63488 00:29:58.726 } 00:29:58.726 ] 00:29:58.726 }' 00:29:58.726 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:58.726 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:58.726 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:58.726 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:58.726 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:58.985 [2024-07-13 23:15:48.373367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.244 [2024-07-13 23:15:48.400784] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:59.244 [2024-07-13 23:15:48.401040] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:59.244 [2024-07-13 23:15:48.401104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.244 [2024-07-13 23:15:48.401206] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:59.244 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:59.244 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:59.244 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:59.244 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:59.244 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:59.245 23:15:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:59.245 "name": "raid_bdev1", 00:29:59.245 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:29:59.245 "strip_size_kb": 0, 00:29:59.245 "state": "online", 00:29:59.245 "raid_level": "raid1", 00:29:59.245 "superblock": true, 00:29:59.245 "num_base_bdevs": 4, 00:29:59.245 "num_base_bdevs_discovered": 2, 00:29:59.245 "num_base_bdevs_operational": 2, 00:29:59.245 "base_bdevs_list": [ 00:29:59.245 { 00:29:59.245 "name": null, 00:29:59.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.245 "is_configured": false, 00:29:59.245 "data_offset": 2048, 00:29:59.245 "data_size": 63488 00:29:59.245 }, 00:29:59.245 { 00:29:59.245 "name": null, 00:29:59.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.245 "is_configured": false, 00:29:59.245 "data_offset": 2048, 00:29:59.245 "data_size": 63488 00:29:59.245 }, 00:29:59.245 { 00:29:59.245 "name": "BaseBdev3", 00:29:59.245 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:29:59.245 "is_configured": true, 00:29:59.245 "data_offset": 2048, 00:29:59.245 "data_size": 63488 00:29:59.245 }, 00:29:59.245 { 00:29:59.245 "name": "BaseBdev4", 00:29:59.245 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:29:59.245 "is_configured": true, 00:29:59.245 "data_offset": 2048, 00:29:59.245 "data_size": 63488 00:29:59.245 } 00:29:59.245 ] 00:29:59.245 }' 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:59.245 23:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:00.181 "name": "raid_bdev1", 00:30:00.181 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:30:00.181 "strip_size_kb": 0, 00:30:00.181 "state": "online", 00:30:00.181 "raid_level": "raid1", 00:30:00.181 "superblock": true, 00:30:00.181 "num_base_bdevs": 4, 00:30:00.181 "num_base_bdevs_discovered": 2, 00:30:00.181 "num_base_bdevs_operational": 2, 00:30:00.181 "base_bdevs_list": [ 00:30:00.181 { 00:30:00.181 "name": null, 00:30:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.181 "is_configured": false, 00:30:00.181 "data_offset": 2048, 00:30:00.181 "data_size": 63488 00:30:00.181 }, 00:30:00.181 { 00:30:00.181 "name": null, 
00:30:00.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.181 "is_configured": false, 00:30:00.181 "data_offset": 2048, 00:30:00.181 "data_size": 63488 00:30:00.181 }, 00:30:00.181 { 00:30:00.181 "name": "BaseBdev3", 00:30:00.181 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:30:00.181 "is_configured": true, 00:30:00.181 "data_offset": 2048, 00:30:00.181 "data_size": 63488 00:30:00.181 }, 00:30:00.181 { 00:30:00.181 "name": "BaseBdev4", 00:30:00.181 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:30:00.181 "is_configured": true, 00:30:00.181 "data_offset": 2048, 00:30:00.181 "data_size": 63488 00:30:00.181 } 00:30:00.181 ] 00:30:00.181 }' 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:00.181 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:00.440 23:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:00.699 [2024-07-13 23:15:49.994857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:00.699 [2024-07-13 23:15:49.995112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.699 [2024-07-13 23:15:49.995217] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:00.699 [2024-07-13 23:15:49.995450] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.699 [2024-07-13 23:15:49.995945] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.699 [2024-07-13 23:15:49.996125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:00.699 [2024-07-13 23:15:49.996334] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:00.699 [2024-07-13 23:15:49.996452] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:00.699 [2024-07-13 23:15:49.996621] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:00.699 BaseBdev1 00:30:00.699 23:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:01.634 
23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.634 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.892 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.892 "name": "raid_bdev1", 00:30:01.892 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:30:01.892 "strip_size_kb": 0, 00:30:01.892 "state": "online", 00:30:01.892 "raid_level": "raid1", 00:30:01.892 "superblock": true, 00:30:01.892 "num_base_bdevs": 4, 00:30:01.892 "num_base_bdevs_discovered": 2, 00:30:01.892 "num_base_bdevs_operational": 2, 00:30:01.892 "base_bdevs_list": [ 00:30:01.892 { 00:30:01.892 "name": null, 00:30:01.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.892 "is_configured": false, 00:30:01.892 "data_offset": 2048, 00:30:01.892 "data_size": 63488 00:30:01.892 }, 00:30:01.892 { 00:30:01.892 "name": null, 00:30:01.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.892 "is_configured": false, 00:30:01.892 "data_offset": 2048, 00:30:01.892 "data_size": 63488 00:30:01.892 }, 00:30:01.892 { 00:30:01.892 "name": "BaseBdev3", 00:30:01.892 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:30:01.892 "is_configured": true, 00:30:01.892 "data_offset": 2048, 00:30:01.892 "data_size": 63488 00:30:01.892 }, 00:30:01.892 { 00:30:01.892 "name": "BaseBdev4", 00:30:01.892 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:30:01.892 "is_configured": true, 00:30:01.892 "data_offset": 2048, 00:30:01.892 "data_size": 63488 00:30:01.892 } 00:30:01.892 ] 00:30:01.892 }' 00:30:01.893 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:01.893 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.828 23:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.828 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.828 "name": "raid_bdev1", 00:30:02.828 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:30:02.828 "strip_size_kb": 0, 00:30:02.828 "state": "online", 00:30:02.828 "raid_level": "raid1", 00:30:02.828 
"superblock": true, 00:30:02.828 "num_base_bdevs": 4, 00:30:02.828 "num_base_bdevs_discovered": 2, 00:30:02.828 "num_base_bdevs_operational": 2, 00:30:02.828 "base_bdevs_list": [ 00:30:02.828 { 00:30:02.828 "name": null, 00:30:02.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.828 "is_configured": false, 00:30:02.828 "data_offset": 2048, 00:30:02.828 "data_size": 63488 00:30:02.828 }, 00:30:02.828 { 00:30:02.828 "name": null, 00:30:02.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.851 "is_configured": false, 00:30:02.851 "data_offset": 2048, 00:30:02.851 "data_size": 63488 00:30:02.851 }, 00:30:02.851 { 00:30:02.851 "name": "BaseBdev3", 00:30:02.851 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:30:02.851 "is_configured": true, 00:30:02.851 "data_offset": 2048, 00:30:02.851 "data_size": 63488 00:30:02.851 }, 00:30:02.851 { 00:30:02.851 "name": "BaseBdev4", 00:30:02.851 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:30:02.851 "is_configured": true, 00:30:02.851 "data_offset": 2048, 00:30:02.851 "data_size": 63488 00:30:02.851 } 00:30:02.851 ] 00:30:02.851 }' 00:30:02.851 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.851 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:02.851 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.851 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:02.851 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.851 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:02.852 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:03.110 [2024-07-13 23:15:52.481025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:03.110 
[2024-07-13 23:15:52.481411] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:03.110 [2024-07-13 23:15:52.481595] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:03.110 request: 00:30:03.110 { 00:30:03.110 "base_bdev": "BaseBdev1", 00:30:03.110 "raid_bdev": "raid_bdev1", 00:30:03.110 "method": "bdev_raid_add_base_bdev", 00:30:03.110 "req_id": 1 00:30:03.110 } 00:30:03.110 Got JSON-RPC error response 00:30:03.110 response: 00:30:03.110 { 00:30:03.110 "code": -22, 00:30:03.110 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:03.110 } 00:30:03.110 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:30:03.110 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:03.110 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:03.110 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:03.110 23:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.483 "name": "raid_bdev1", 00:30:04.483 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:30:04.483 "strip_size_kb": 0, 00:30:04.483 "state": "online", 00:30:04.483 "raid_level": "raid1", 00:30:04.483 "superblock": true, 00:30:04.483 "num_base_bdevs": 4, 00:30:04.483 "num_base_bdevs_discovered": 2, 00:30:04.483 "num_base_bdevs_operational": 2, 00:30:04.483 "base_bdevs_list": [ 00:30:04.483 { 00:30:04.483 "name": null, 00:30:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.483 "is_configured": false, 00:30:04.483 "data_offset": 2048, 00:30:04.483 "data_size": 63488 00:30:04.483 }, 00:30:04.483 { 00:30:04.483 "name": null, 00:30:04.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.483 "is_configured": false, 00:30:04.483 
"data_offset": 2048, 00:30:04.483 "data_size": 63488 00:30:04.483 }, 00:30:04.483 { 00:30:04.483 "name": "BaseBdev3", 00:30:04.483 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:30:04.483 "is_configured": true, 00:30:04.483 "data_offset": 2048, 00:30:04.483 "data_size": 63488 00:30:04.483 }, 00:30:04.483 { 00:30:04.483 "name": "BaseBdev4", 00:30:04.483 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:30:04.483 "is_configured": true, 00:30:04.483 "data_offset": 2048, 00:30:04.483 "data_size": 63488 00:30:04.483 } 00:30:04.483 ] 00:30:04.483 }' 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.483 23:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.049 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:05.049 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.050 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:05.050 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:05.050 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.050 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.050 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.308 "name": "raid_bdev1", 00:30:05.308 "uuid": "90c5691d-5f7e-4e7a-b7ed-70790ef54c27", 00:30:05.308 "strip_size_kb": 0, 00:30:05.308 "state": "online", 00:30:05.308 "raid_level": "raid1", 00:30:05.308 "superblock": true, 00:30:05.308 "num_base_bdevs": 4, 00:30:05.308 "num_base_bdevs_discovered": 2, 00:30:05.308 "num_base_bdevs_operational": 2, 00:30:05.308 "base_bdevs_list": [ 00:30:05.308 { 00:30:05.308 "name": null, 00:30:05.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.308 "is_configured": false, 00:30:05.308 "data_offset": 2048, 00:30:05.308 "data_size": 63488 00:30:05.308 }, 00:30:05.308 { 00:30:05.308 "name": null, 00:30:05.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.308 "is_configured": false, 00:30:05.308 "data_offset": 2048, 00:30:05.308 "data_size": 63488 00:30:05.308 }, 00:30:05.308 { 00:30:05.308 "name": "BaseBdev3", 00:30:05.308 "uuid": "7c0e895c-b126-5c90-b5d1-1b5aaad968c7", 00:30:05.308 "is_configured": true, 00:30:05.308 "data_offset": 2048, 00:30:05.308 "data_size": 63488 00:30:05.308 }, 00:30:05.308 { 00:30:05.308 "name": "BaseBdev4", 00:30:05.308 "uuid": "0dc2d448-e0d8-5bc7-8254-3291034b45e8", 00:30:05.308 "is_configured": true, 00:30:05.308 "data_offset": 2048, 00:30:05.308 "data_size": 63488 00:30:05.308 } 00:30:05.308 ] 00:30:05.308 }' 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:05.308 23:15:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 158714 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 158714 ']' 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 158714 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:05.308 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158714 00:30:05.566 killing process with pid 158714 00:30:05.566 Received shutdown signal, test time was about 27.905804 seconds 00:30:05.566 00:30:05.566 Latency(us) 00:30:05.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.566 =================================================================================================================== 00:30:05.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.566 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:05.566 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:05.566 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158714' 00:30:05.566 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 158714 00:30:05.567 23:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 158714 00:30:05.567 [2024-07-13 23:15:54.729149] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:05.567 [2024-07-13 23:15:54.729363] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:05.567 [2024-07-13 23:15:54.729558] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:05.567 [2024-07-13 23:15:54.729721] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:30:05.567 [2024-07-13 23:15:54.773691] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:05.826 23:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:05.826 00:30:05.826 real 0m33.553s 00:30:05.826 user 0m54.826s 00:30:05.826 ************************************ 00:30:05.826 END TEST raid_rebuild_test_sb_io 00:30:05.826 ************************************ 00:30:05.826 sys 0m3.937s 00:30:05.826 23:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.826 23:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.826 23:15:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:05.826 23:15:55 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:30:05.826 23:15:55 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:30:05.826 23:15:55 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:30:05.826 23:15:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:30:05.826 23:15:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.826 23:15:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:05.826 ************************************ 00:30:05.826 START TEST raid5f_state_function_test 
00:30:05.826 ************************************ 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 false 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=159628 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 159628' 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:05.826 Process raid pid: 159628 00:30:05.826 23:15:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 159628 /var/tmp/spdk-raid.sock 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 159628 ']' 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:05.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.826 23:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.826 [2024-07-13 23:15:55.150575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:05.826 [2024-07-13 23:15:55.150823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.085 [2024-07-13 23:15:55.293268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.085 [2024-07-13 23:15:55.364330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.085 [2024-07-13 23:15:55.418534] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:07.019 [2024-07-13 23:15:56.335681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:07.019 [2024-07-13 23:15:56.335778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:07.019 [2024-07-13 23:15:56.335811] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:07.019 [2024-07-13 23:15:56.335829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:07.019 [2024-07-13 23:15:56.335837] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:07.019 [2024-07-13 23:15:56.335875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.019 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:07.277 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.277 "name": "Existed_Raid", 00:30:07.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.277 "strip_size_kb": 64, 00:30:07.277 "state": "configuring", 00:30:07.277 "raid_level": "raid5f", 00:30:07.277 "superblock": false, 00:30:07.277 "num_base_bdevs": 3, 00:30:07.277 "num_base_bdevs_discovered": 0, 00:30:07.277 "num_base_bdevs_operational": 3, 00:30:07.277 "base_bdevs_list": [ 00:30:07.277 { 00:30:07.277 "name": "BaseBdev1", 00:30:07.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.277 "is_configured": false, 00:30:07.277 "data_offset": 0, 00:30:07.278 "data_size": 0 00:30:07.278 }, 00:30:07.278 { 00:30:07.278 "name": "BaseBdev2", 00:30:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.278 "is_configured": false, 00:30:07.278 "data_offset": 0, 00:30:07.278 "data_size": 0 00:30:07.278 }, 00:30:07.278 { 00:30:07.278 "name": "BaseBdev3", 00:30:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.278 "is_configured": false, 00:30:07.278 "data_offset": 0, 00:30:07.278 "data_size": 0 00:30:07.278 } 00:30:07.278 ] 00:30:07.278 }' 00:30:07.278 23:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.278 23:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:08.212 23:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:08.212 [2024-07-13 23:15:57.507762] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:08.212 [2024-07-13 23:15:57.507830] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:30:08.212 23:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:08.471 [2024-07-13 23:15:57.771837] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:08.471 [2024-07-13 23:15:57.771933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:08.471 [2024-07-13 23:15:57.771963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:08.471 [2024-07-13 23:15:57.771982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:30:08.471 [2024-07-13 23:15:57.771991] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:08.471 [2024-07-13 23:15:57.772014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:08.471 23:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:08.729 [2024-07-13 23:15:58.042759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:08.729 BaseBdev1 00:30:08.729 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:08.729 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:30:08.729 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:08.729 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:08.729 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:08.730 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:08.730 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:08.988 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:09.247 [ 00:30:09.247 { 00:30:09.247 "name": "BaseBdev1", 00:30:09.247 "aliases": [ 00:30:09.247 "b16fd555-ecb0-4af4-911c-ef53ec19590c" 00:30:09.247 ], 00:30:09.247 "product_name": "Malloc disk", 00:30:09.247 "block_size": 512, 00:30:09.247 "num_blocks": 65536, 00:30:09.247 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:09.247 "assigned_rate_limits": { 00:30:09.247 "rw_ios_per_sec": 0, 00:30:09.247 "rw_mbytes_per_sec": 0, 00:30:09.247 "r_mbytes_per_sec": 0, 00:30:09.247 "w_mbytes_per_sec": 0 00:30:09.247 }, 00:30:09.247 "claimed": true, 00:30:09.247 "claim_type": "exclusive_write", 00:30:09.247 "zoned": false, 00:30:09.247 "supported_io_types": { 00:30:09.247 "read": true, 00:30:09.247 "write": true, 00:30:09.247 "unmap": true, 00:30:09.247 "flush": true, 00:30:09.247 "reset": true, 00:30:09.247 "nvme_admin": false, 00:30:09.247 "nvme_io": false, 00:30:09.247 "nvme_io_md": false, 00:30:09.247 "write_zeroes": true, 00:30:09.247 "zcopy": true, 00:30:09.247 "get_zone_info": false, 00:30:09.247 "zone_management": false, 00:30:09.247 "zone_append": false, 00:30:09.247 "compare": false, 00:30:09.247 "compare_and_write": false, 00:30:09.247 "abort": true, 00:30:09.247 "seek_hole": false, 00:30:09.247 "seek_data": false, 00:30:09.247 "copy": true, 00:30:09.247 "nvme_iov_md": false 00:30:09.247 }, 00:30:09.247 "memory_domains": [ 00:30:09.247 { 00:30:09.247 "dma_device_id": "system", 00:30:09.247 "dma_device_type": 1 00:30:09.247 }, 00:30:09.247 { 00:30:09.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.247 "dma_device_type": 2 00:30:09.247 } 00:30:09.247 ], 00:30:09.247 "driver_specific": {} 00:30:09.247 } 00:30:09.247 ] 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- 
# verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:09.247 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:09.248 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.248 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.248 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.248 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.248 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.248 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.506 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.506 "name": "Existed_Raid", 00:30:09.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.506 "strip_size_kb": 64, 00:30:09.506 "state": "configuring", 00:30:09.506 "raid_level": "raid5f", 00:30:09.506 "superblock": false, 00:30:09.506 "num_base_bdevs": 3, 00:30:09.506 "num_base_bdevs_discovered": 1, 00:30:09.506 "num_base_bdevs_operational": 3, 00:30:09.506 "base_bdevs_list": [ 00:30:09.506 { 00:30:09.506 "name": "BaseBdev1", 00:30:09.506 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:09.506 "is_configured": true, 00:30:09.506 "data_offset": 0, 00:30:09.506 "data_size": 65536 00:30:09.506 }, 00:30:09.506 { 00:30:09.506 "name": "BaseBdev2", 00:30:09.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.506 "is_configured": false, 00:30:09.506 "data_offset": 0, 00:30:09.506 "data_size": 0 00:30:09.506 }, 00:30:09.506 { 00:30:09.506 "name": "BaseBdev3", 00:30:09.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.506 "is_configured": false, 00:30:09.506 "data_offset": 0, 00:30:09.506 "data_size": 0 00:30:09.506 } 00:30:09.506 ] 00:30:09.506 }' 00:30:09.506 23:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.506 23:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.078 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:10.354 [2024-07-13 23:15:59.679179] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:10.354 [2024-07-13 23:15:59.679272] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:30:10.354 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:10.612 
[2024-07-13 23:15:59.939275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:10.612 [2024-07-13 23:15:59.941530] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:10.612 [2024-07-13 23:15:59.941605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:10.612 [2024-07-13 23:15:59.941635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:10.612 [2024-07-13 23:15:59.941661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.612 23:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.870 23:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.870 "name": "Existed_Raid", 00:30:10.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.870 "strip_size_kb": 64, 00:30:10.870 "state": "configuring", 00:30:10.870 "raid_level": "raid5f", 00:30:10.870 "superblock": false, 00:30:10.870 "num_base_bdevs": 3, 00:30:10.870 "num_base_bdevs_discovered": 1, 00:30:10.870 "num_base_bdevs_operational": 3, 00:30:10.870 "base_bdevs_list": [ 00:30:10.870 { 00:30:10.870 "name": "BaseBdev1", 00:30:10.870 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:10.870 "is_configured": true, 00:30:10.870 "data_offset": 0, 00:30:10.870 "data_size": 65536 00:30:10.870 }, 00:30:10.870 { 00:30:10.870 "name": "BaseBdev2", 00:30:10.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.870 "is_configured": false, 00:30:10.870 "data_offset": 0, 00:30:10.870 "data_size": 0 00:30:10.870 }, 00:30:10.870 { 00:30:10.870 "name": "BaseBdev3", 00:30:10.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.870 "is_configured": false, 00:30:10.870 "data_offset": 0, 00:30:10.870 "data_size": 0 00:30:10.870 } 00:30:10.870 ] 00:30:10.870 }' 
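The verify_raid_bdev_state assertions traced above reduce to a single RPC round-trip plus jq filtering. A minimal standalone sketch of the same probe, assuming an SPDK target is already serving /var/tmp/spdk-raid.sock (the socket path, rpc.py location, and expected values are taken from this run, not guaranteed elsewhere):

    # Fetch the Existed_Raid record and assert on its state fields.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 1 ]]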
00:30:10.870 23:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.870 23:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.805 23:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:11.805 [2024-07-13 23:16:01.136496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:11.805 BaseBdev2 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:11.805 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:12.063 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:12.322 [ 00:30:12.322 { 00:30:12.322 "name": "BaseBdev2", 00:30:12.322 "aliases": [ 00:30:12.322 "df3638d8-a602-48d1-b7db-21050936ff00" 00:30:12.322 ], 00:30:12.322 "product_name": "Malloc disk", 00:30:12.322 "block_size": 512, 00:30:12.322 "num_blocks": 65536, 00:30:12.322 "uuid": "df3638d8-a602-48d1-b7db-21050936ff00", 00:30:12.322 "assigned_rate_limits": { 00:30:12.322 "rw_ios_per_sec": 0, 00:30:12.322 "rw_mbytes_per_sec": 0, 00:30:12.322 "r_mbytes_per_sec": 0, 00:30:12.322 "w_mbytes_per_sec": 0 00:30:12.322 }, 00:30:12.322 "claimed": true, 00:30:12.322 "claim_type": "exclusive_write", 00:30:12.322 "zoned": false, 00:30:12.322 "supported_io_types": { 00:30:12.322 "read": true, 00:30:12.322 "write": true, 00:30:12.322 "unmap": true, 00:30:12.322 "flush": true, 00:30:12.322 "reset": true, 00:30:12.322 "nvme_admin": false, 00:30:12.322 "nvme_io": false, 00:30:12.322 "nvme_io_md": false, 00:30:12.322 "write_zeroes": true, 00:30:12.322 "zcopy": true, 00:30:12.322 "get_zone_info": false, 00:30:12.322 "zone_management": false, 00:30:12.322 "zone_append": false, 00:30:12.322 "compare": false, 00:30:12.322 "compare_and_write": false, 00:30:12.322 "abort": true, 00:30:12.322 "seek_hole": false, 00:30:12.322 "seek_data": false, 00:30:12.322 "copy": true, 00:30:12.322 "nvme_iov_md": false 00:30:12.322 }, 00:30:12.322 "memory_domains": [ 00:30:12.322 { 00:30:12.322 "dma_device_id": "system", 00:30:12.322 "dma_device_type": 1 00:30:12.322 }, 00:30:12.322 { 00:30:12.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.322 "dma_device_type": 2 00:30:12.322 } 00:30:12.322 ], 00:30:12.322 "driver_specific": {} 00:30:12.322 } 00:30:12.322 ] 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.322 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.579 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.579 "name": "Existed_Raid", 00:30:12.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.580 "strip_size_kb": 64, 00:30:12.580 "state": "configuring", 00:30:12.580 "raid_level": "raid5f", 00:30:12.580 "superblock": false, 00:30:12.580 "num_base_bdevs": 3, 00:30:12.580 "num_base_bdevs_discovered": 2, 00:30:12.580 "num_base_bdevs_operational": 3, 00:30:12.580 "base_bdevs_list": [ 00:30:12.580 { 00:30:12.580 "name": "BaseBdev1", 00:30:12.580 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:12.580 "is_configured": true, 00:30:12.580 "data_offset": 0, 00:30:12.580 "data_size": 65536 00:30:12.580 }, 00:30:12.580 { 00:30:12.580 "name": "BaseBdev2", 00:30:12.580 "uuid": "df3638d8-a602-48d1-b7db-21050936ff00", 00:30:12.580 "is_configured": true, 00:30:12.580 "data_offset": 0, 00:30:12.580 "data_size": 65536 00:30:12.580 }, 00:30:12.580 { 00:30:12.580 "name": "BaseBdev3", 00:30:12.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.580 "is_configured": false, 00:30:12.580 "data_offset": 0, 00:30:12.580 "data_size": 0 00:30:12.580 } 00:30:12.580 ] 00:30:12.580 }' 00:30:12.580 23:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.580 23:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:13.516 [2024-07-13 23:16:02.862102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:13.516 [2024-07-13 23:16:02.862200] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:30:13.516 [2024-07-13 23:16:02.862211] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 
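With the third malloc bdev claimed, the records that follow show raid_bdev_configure completing and Existed_Raid transitioning from configuring to online. A sketch of polling for that transition over the same socket (the loop bound and sleep interval are illustrative assumptions, not taken from the script):

    # Poll until the raid bdev reports state "online", giving up after ~10s.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 20); do
        state=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
                | jq -r '.[] | select(.name == "Existed_Raid").state')
        [[ "$state" == online ]] && break
        sleep 0.5
    done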
00:30:13.516 [2024-07-13 23:16:02.862359] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:30:13.516 [2024-07-13 23:16:02.863352] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:30:13.516 [2024-07-13 23:16:02.863379] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:30:13.516 [2024-07-13 23:16:02.863719] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:13.516 BaseBdev3 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:13.516 23:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:13.775 23:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:14.032 [ 00:30:14.032 { 00:30:14.032 "name": "BaseBdev3", 00:30:14.032 "aliases": [ 00:30:14.032 "2d072523-253b-4503-a188-ae8b0d53c5e3" 00:30:14.032 ], 00:30:14.032 "product_name": "Malloc disk", 00:30:14.032 "block_size": 512, 00:30:14.032 "num_blocks": 65536, 00:30:14.032 "uuid": "2d072523-253b-4503-a188-ae8b0d53c5e3", 00:30:14.032 "assigned_rate_limits": { 00:30:14.032 "rw_ios_per_sec": 0, 00:30:14.032 "rw_mbytes_per_sec": 0, 00:30:14.032 "r_mbytes_per_sec": 0, 00:30:14.032 "w_mbytes_per_sec": 0 00:30:14.032 }, 00:30:14.032 "claimed": true, 00:30:14.032 "claim_type": "exclusive_write", 00:30:14.032 "zoned": false, 00:30:14.032 "supported_io_types": { 00:30:14.032 "read": true, 00:30:14.032 "write": true, 00:30:14.032 "unmap": true, 00:30:14.032 "flush": true, 00:30:14.032 "reset": true, 00:30:14.032 "nvme_admin": false, 00:30:14.032 "nvme_io": false, 00:30:14.032 "nvme_io_md": false, 00:30:14.032 "write_zeroes": true, 00:30:14.032 "zcopy": true, 00:30:14.032 "get_zone_info": false, 00:30:14.032 "zone_management": false, 00:30:14.032 "zone_append": false, 00:30:14.032 "compare": false, 00:30:14.032 "compare_and_write": false, 00:30:14.032 "abort": true, 00:30:14.032 "seek_hole": false, 00:30:14.032 "seek_data": false, 00:30:14.032 "copy": true, 00:30:14.032 "nvme_iov_md": false 00:30:14.032 }, 00:30:14.032 "memory_domains": [ 00:30:14.032 { 00:30:14.032 "dma_device_id": "system", 00:30:14.032 "dma_device_type": 1 00:30:14.032 }, 00:30:14.032 { 00:30:14.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.032 "dma_device_type": 2 00:30:14.032 } 00:30:14.032 ], 00:30:14.032 "driver_specific": {} 00:30:14.032 } 00:30:14.032 ] 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.032 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.289 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:14.289 "name": "Existed_Raid", 00:30:14.289 "uuid": "efeecc43-1340-4e30-8ae7-66398253a3be", 00:30:14.289 "strip_size_kb": 64, 00:30:14.289 "state": "online", 00:30:14.289 "raid_level": "raid5f", 00:30:14.289 "superblock": false, 00:30:14.289 "num_base_bdevs": 3, 00:30:14.289 "num_base_bdevs_discovered": 3, 00:30:14.289 "num_base_bdevs_operational": 3, 00:30:14.289 "base_bdevs_list": [ 00:30:14.289 { 00:30:14.289 "name": "BaseBdev1", 00:30:14.289 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:14.289 "is_configured": true, 00:30:14.289 "data_offset": 0, 00:30:14.289 "data_size": 65536 00:30:14.289 }, 00:30:14.289 { 00:30:14.289 "name": "BaseBdev2", 00:30:14.289 "uuid": "df3638d8-a602-48d1-b7db-21050936ff00", 00:30:14.289 "is_configured": true, 00:30:14.289 "data_offset": 0, 00:30:14.289 "data_size": 65536 00:30:14.289 }, 00:30:14.289 { 00:30:14.289 "name": "BaseBdev3", 00:30:14.289 "uuid": "2d072523-253b-4503-a188-ae8b0d53c5e3", 00:30:14.289 "is_configured": true, 00:30:14.289 "data_offset": 0, 00:30:14.289 "data_size": 65536 00:30:14.289 } 00:30:14.289 ] 00:30:14.289 }' 00:30:14.289 23:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:14.289 23:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:14.854 
23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:14.854 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:15.111 [2024-07-13 23:16:04.495280] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:15.111 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:15.111 "name": "Existed_Raid", 00:30:15.111 "aliases": [ 00:30:15.111 "efeecc43-1340-4e30-8ae7-66398253a3be" 00:30:15.111 ], 00:30:15.111 "product_name": "Raid Volume", 00:30:15.111 "block_size": 512, 00:30:15.111 "num_blocks": 131072, 00:30:15.111 "uuid": "efeecc43-1340-4e30-8ae7-66398253a3be", 00:30:15.111 "assigned_rate_limits": { 00:30:15.111 "rw_ios_per_sec": 0, 00:30:15.111 "rw_mbytes_per_sec": 0, 00:30:15.111 "r_mbytes_per_sec": 0, 00:30:15.111 "w_mbytes_per_sec": 0 00:30:15.111 }, 00:30:15.111 "claimed": false, 00:30:15.111 "zoned": false, 00:30:15.111 "supported_io_types": { 00:30:15.111 "read": true, 00:30:15.111 "write": true, 00:30:15.111 "unmap": false, 00:30:15.111 "flush": false, 00:30:15.111 "reset": true, 00:30:15.111 "nvme_admin": false, 00:30:15.111 "nvme_io": false, 00:30:15.111 "nvme_io_md": false, 00:30:15.111 "write_zeroes": true, 00:30:15.111 "zcopy": false, 00:30:15.111 "get_zone_info": false, 00:30:15.111 "zone_management": false, 00:30:15.111 "zone_append": false, 00:30:15.111 "compare": false, 00:30:15.111 "compare_and_write": false, 00:30:15.111 "abort": false, 00:30:15.111 "seek_hole": false, 00:30:15.111 "seek_data": false, 00:30:15.111 "copy": false, 00:30:15.111 "nvme_iov_md": false 00:30:15.111 }, 00:30:15.111 "driver_specific": { 00:30:15.111 "raid": { 00:30:15.111 "uuid": "efeecc43-1340-4e30-8ae7-66398253a3be", 00:30:15.111 "strip_size_kb": 64, 00:30:15.111 "state": "online", 00:30:15.111 "raid_level": "raid5f", 00:30:15.111 "superblock": false, 00:30:15.111 "num_base_bdevs": 3, 00:30:15.112 "num_base_bdevs_discovered": 3, 00:30:15.112 "num_base_bdevs_operational": 3, 00:30:15.112 "base_bdevs_list": [ 00:30:15.112 { 00:30:15.112 "name": "BaseBdev1", 00:30:15.112 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:15.112 "is_configured": true, 00:30:15.112 "data_offset": 0, 00:30:15.112 "data_size": 65536 00:30:15.112 }, 00:30:15.112 { 00:30:15.112 "name": "BaseBdev2", 00:30:15.112 "uuid": "df3638d8-a602-48d1-b7db-21050936ff00", 00:30:15.112 "is_configured": true, 00:30:15.112 "data_offset": 0, 00:30:15.112 "data_size": 65536 00:30:15.112 }, 00:30:15.112 { 00:30:15.112 "name": "BaseBdev3", 00:30:15.112 "uuid": "2d072523-253b-4503-a188-ae8b0d53c5e3", 00:30:15.112 "is_configured": true, 00:30:15.112 "data_offset": 0, 00:30:15.112 "data_size": 65536 00:30:15.112 } 00:30:15.112 ] 00:30:15.112 } 00:30:15.112 } 00:30:15.112 }' 00:30:15.112 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:15.369 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:15.369 BaseBdev2 00:30:15.369 BaseBdev3' 00:30:15.369 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:15.369 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
jq '.[]' 00:30:15.369 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:15.369 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:15.369 "name": "BaseBdev1", 00:30:15.369 "aliases": [ 00:30:15.369 "b16fd555-ecb0-4af4-911c-ef53ec19590c" 00:30:15.369 ], 00:30:15.369 "product_name": "Malloc disk", 00:30:15.369 "block_size": 512, 00:30:15.369 "num_blocks": 65536, 00:30:15.369 "uuid": "b16fd555-ecb0-4af4-911c-ef53ec19590c", 00:30:15.369 "assigned_rate_limits": { 00:30:15.369 "rw_ios_per_sec": 0, 00:30:15.369 "rw_mbytes_per_sec": 0, 00:30:15.369 "r_mbytes_per_sec": 0, 00:30:15.369 "w_mbytes_per_sec": 0 00:30:15.369 }, 00:30:15.369 "claimed": true, 00:30:15.369 "claim_type": "exclusive_write", 00:30:15.369 "zoned": false, 00:30:15.369 "supported_io_types": { 00:30:15.369 "read": true, 00:30:15.369 "write": true, 00:30:15.369 "unmap": true, 00:30:15.369 "flush": true, 00:30:15.369 "reset": true, 00:30:15.369 "nvme_admin": false, 00:30:15.369 "nvme_io": false, 00:30:15.369 "nvme_io_md": false, 00:30:15.369 "write_zeroes": true, 00:30:15.369 "zcopy": true, 00:30:15.369 "get_zone_info": false, 00:30:15.369 "zone_management": false, 00:30:15.369 "zone_append": false, 00:30:15.369 "compare": false, 00:30:15.369 "compare_and_write": false, 00:30:15.369 "abort": true, 00:30:15.369 "seek_hole": false, 00:30:15.369 "seek_data": false, 00:30:15.369 "copy": true, 00:30:15.369 "nvme_iov_md": false 00:30:15.369 }, 00:30:15.369 "memory_domains": [ 00:30:15.369 { 00:30:15.369 "dma_device_id": "system", 00:30:15.369 "dma_device_type": 1 00:30:15.369 }, 00:30:15.369 { 00:30:15.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.369 "dma_device_type": 2 00:30:15.369 } 00:30:15.369 ], 00:30:15.369 "driver_specific": {} 00:30:15.369 }' 00:30:15.369 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:15.627 23:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:15.884 23:16:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:16.142 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:16.142 "name": "BaseBdev2", 00:30:16.142 "aliases": [ 00:30:16.142 "df3638d8-a602-48d1-b7db-21050936ff00" 00:30:16.142 ], 00:30:16.142 "product_name": "Malloc disk", 00:30:16.142 "block_size": 512, 00:30:16.142 "num_blocks": 65536, 00:30:16.142 "uuid": "df3638d8-a602-48d1-b7db-21050936ff00", 00:30:16.142 "assigned_rate_limits": { 00:30:16.142 "rw_ios_per_sec": 0, 00:30:16.142 "rw_mbytes_per_sec": 0, 00:30:16.142 "r_mbytes_per_sec": 0, 00:30:16.142 "w_mbytes_per_sec": 0 00:30:16.142 }, 00:30:16.142 "claimed": true, 00:30:16.142 "claim_type": "exclusive_write", 00:30:16.142 "zoned": false, 00:30:16.142 "supported_io_types": { 00:30:16.142 "read": true, 00:30:16.142 "write": true, 00:30:16.142 "unmap": true, 00:30:16.142 "flush": true, 00:30:16.142 "reset": true, 00:30:16.142 "nvme_admin": false, 00:30:16.142 "nvme_io": false, 00:30:16.142 "nvme_io_md": false, 00:30:16.142 "write_zeroes": true, 00:30:16.142 "zcopy": true, 00:30:16.142 "get_zone_info": false, 00:30:16.142 "zone_management": false, 00:30:16.142 "zone_append": false, 00:30:16.142 "compare": false, 00:30:16.142 "compare_and_write": false, 00:30:16.142 "abort": true, 00:30:16.142 "seek_hole": false, 00:30:16.142 "seek_data": false, 00:30:16.142 "copy": true, 00:30:16.142 "nvme_iov_md": false 00:30:16.142 }, 00:30:16.142 "memory_domains": [ 00:30:16.142 { 00:30:16.142 "dma_device_id": "system", 00:30:16.142 "dma_device_type": 1 00:30:16.142 }, 00:30:16.142 { 00:30:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:16.142 "dma_device_type": 2 00:30:16.142 } 00:30:16.142 ], 00:30:16.142 "driver_specific": {} 00:30:16.142 }' 00:30:16.142 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:16.142 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:16.142 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:16.142 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:16.142 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:16.400 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:16.659 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:16.659 "name": 
"BaseBdev3", 00:30:16.659 "aliases": [ 00:30:16.659 "2d072523-253b-4503-a188-ae8b0d53c5e3" 00:30:16.659 ], 00:30:16.659 "product_name": "Malloc disk", 00:30:16.659 "block_size": 512, 00:30:16.659 "num_blocks": 65536, 00:30:16.659 "uuid": "2d072523-253b-4503-a188-ae8b0d53c5e3", 00:30:16.659 "assigned_rate_limits": { 00:30:16.659 "rw_ios_per_sec": 0, 00:30:16.659 "rw_mbytes_per_sec": 0, 00:30:16.659 "r_mbytes_per_sec": 0, 00:30:16.659 "w_mbytes_per_sec": 0 00:30:16.659 }, 00:30:16.659 "claimed": true, 00:30:16.659 "claim_type": "exclusive_write", 00:30:16.659 "zoned": false, 00:30:16.659 "supported_io_types": { 00:30:16.659 "read": true, 00:30:16.659 "write": true, 00:30:16.659 "unmap": true, 00:30:16.659 "flush": true, 00:30:16.659 "reset": true, 00:30:16.659 "nvme_admin": false, 00:30:16.659 "nvme_io": false, 00:30:16.659 "nvme_io_md": false, 00:30:16.659 "write_zeroes": true, 00:30:16.659 "zcopy": true, 00:30:16.659 "get_zone_info": false, 00:30:16.659 "zone_management": false, 00:30:16.659 "zone_append": false, 00:30:16.659 "compare": false, 00:30:16.659 "compare_and_write": false, 00:30:16.659 "abort": true, 00:30:16.659 "seek_hole": false, 00:30:16.659 "seek_data": false, 00:30:16.659 "copy": true, 00:30:16.659 "nvme_iov_md": false 00:30:16.659 }, 00:30:16.659 "memory_domains": [ 00:30:16.659 { 00:30:16.659 "dma_device_id": "system", 00:30:16.659 "dma_device_type": 1 00:30:16.659 }, 00:30:16.659 { 00:30:16.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:16.659 "dma_device_type": 2 00:30:16.659 } 00:30:16.659 ], 00:30:16.659 "driver_specific": {} 00:30:16.659 }' 00:30:16.659 23:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:16.659 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:16.917 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:17.175 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:17.175 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:17.175 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:17.434 [2024-07-13 23:16:06.635599] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 
-- # return 0 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.434 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:17.693 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:17.693 "name": "Existed_Raid", 00:30:17.693 "uuid": "efeecc43-1340-4e30-8ae7-66398253a3be", 00:30:17.693 "strip_size_kb": 64, 00:30:17.693 "state": "online", 00:30:17.693 "raid_level": "raid5f", 00:30:17.693 "superblock": false, 00:30:17.693 "num_base_bdevs": 3, 00:30:17.693 "num_base_bdevs_discovered": 2, 00:30:17.693 "num_base_bdevs_operational": 2, 00:30:17.693 "base_bdevs_list": [ 00:30:17.693 { 00:30:17.693 "name": null, 00:30:17.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.693 "is_configured": false, 00:30:17.693 "data_offset": 0, 00:30:17.693 "data_size": 65536 00:30:17.693 }, 00:30:17.693 { 00:30:17.693 "name": "BaseBdev2", 00:30:17.693 "uuid": "df3638d8-a602-48d1-b7db-21050936ff00", 00:30:17.693 "is_configured": true, 00:30:17.693 "data_offset": 0, 00:30:17.693 "data_size": 65536 00:30:17.693 }, 00:30:17.693 { 00:30:17.693 "name": "BaseBdev3", 00:30:17.693 "uuid": "2d072523-253b-4503-a188-ae8b0d53c5e3", 00:30:17.693 "is_configured": true, 00:30:17.693 "data_offset": 0, 00:30:17.693 "data_size": 65536 00:30:17.693 } 00:30:17.693 ] 00:30:17.693 }' 00:30:17.693 23:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:17.693 23:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.258 23:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:18.258 23:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:18.258 23:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.258 23:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:18.517 23:16:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:18.517 23:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:18.517 23:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:18.776 [2024-07-13 23:16:08.134844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:18.776 [2024-07-13 23:16:08.134975] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:18.776 [2024-07-13 23:16:08.145174] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:18.776 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:18.776 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:18.776 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:18.776 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.367 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:19.367 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:19.367 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:19.367 [2024-07-13 23:16:08.733439] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:19.367 [2024-07-13 23:16:08.733554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:30:19.367 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:19.367 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:19.625 23:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:19.885 BaseBdev2 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # 
local bdev_timeout= 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:19.885 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:20.144 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:20.403 [ 00:30:20.403 { 00:30:20.403 "name": "BaseBdev2", 00:30:20.403 "aliases": [ 00:30:20.403 "2b2772a7-9e12-4545-be4e-78bad638fa08" 00:30:20.403 ], 00:30:20.403 "product_name": "Malloc disk", 00:30:20.403 "block_size": 512, 00:30:20.403 "num_blocks": 65536, 00:30:20.403 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:20.403 "assigned_rate_limits": { 00:30:20.403 "rw_ios_per_sec": 0, 00:30:20.403 "rw_mbytes_per_sec": 0, 00:30:20.403 "r_mbytes_per_sec": 0, 00:30:20.403 "w_mbytes_per_sec": 0 00:30:20.403 }, 00:30:20.403 "claimed": false, 00:30:20.403 "zoned": false, 00:30:20.403 "supported_io_types": { 00:30:20.403 "read": true, 00:30:20.403 "write": true, 00:30:20.403 "unmap": true, 00:30:20.403 "flush": true, 00:30:20.403 "reset": true, 00:30:20.403 "nvme_admin": false, 00:30:20.403 "nvme_io": false, 00:30:20.403 "nvme_io_md": false, 00:30:20.403 "write_zeroes": true, 00:30:20.403 "zcopy": true, 00:30:20.403 "get_zone_info": false, 00:30:20.403 "zone_management": false, 00:30:20.403 "zone_append": false, 00:30:20.403 "compare": false, 00:30:20.403 "compare_and_write": false, 00:30:20.403 "abort": true, 00:30:20.403 "seek_hole": false, 00:30:20.403 "seek_data": false, 00:30:20.403 "copy": true, 00:30:20.403 "nvme_iov_md": false 00:30:20.403 }, 00:30:20.403 "memory_domains": [ 00:30:20.403 { 00:30:20.403 "dma_device_id": "system", 00:30:20.403 "dma_device_type": 1 00:30:20.403 }, 00:30:20.403 { 00:30:20.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.403 "dma_device_type": 2 00:30:20.403 } 00:30:20.403 ], 00:30:20.403 "driver_specific": {} 00:30:20.403 } 00:30:20.403 ] 00:30:20.403 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:20.403 23:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:20.403 23:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:20.403 23:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:20.662 BaseBdev3 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:20.662 23:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:20.920 23:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:21.179 [ 00:30:21.179 { 00:30:21.179 "name": "BaseBdev3", 00:30:21.179 "aliases": [ 00:30:21.179 "d203236a-fce3-4e1c-bc60-cc234ef7cd56" 00:30:21.179 ], 00:30:21.179 "product_name": "Malloc disk", 00:30:21.179 "block_size": 512, 00:30:21.179 "num_blocks": 65536, 00:30:21.179 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:21.179 "assigned_rate_limits": { 00:30:21.179 "rw_ios_per_sec": 0, 00:30:21.179 "rw_mbytes_per_sec": 0, 00:30:21.179 "r_mbytes_per_sec": 0, 00:30:21.179 "w_mbytes_per_sec": 0 00:30:21.179 }, 00:30:21.179 "claimed": false, 00:30:21.179 "zoned": false, 00:30:21.179 "supported_io_types": { 00:30:21.179 "read": true, 00:30:21.179 "write": true, 00:30:21.179 "unmap": true, 00:30:21.179 "flush": true, 00:30:21.179 "reset": true, 00:30:21.179 "nvme_admin": false, 00:30:21.179 "nvme_io": false, 00:30:21.179 "nvme_io_md": false, 00:30:21.179 "write_zeroes": true, 00:30:21.179 "zcopy": true, 00:30:21.179 "get_zone_info": false, 00:30:21.179 "zone_management": false, 00:30:21.179 "zone_append": false, 00:30:21.179 "compare": false, 00:30:21.179 "compare_and_write": false, 00:30:21.179 "abort": true, 00:30:21.179 "seek_hole": false, 00:30:21.179 "seek_data": false, 00:30:21.179 "copy": true, 00:30:21.179 "nvme_iov_md": false 00:30:21.179 }, 00:30:21.179 "memory_domains": [ 00:30:21.179 { 00:30:21.179 "dma_device_id": "system", 00:30:21.179 "dma_device_type": 1 00:30:21.179 }, 00:30:21.179 { 00:30:21.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:21.179 "dma_device_type": 2 00:30:21.179 } 00:30:21.179 ], 00:30:21.179 "driver_specific": {} 00:30:21.179 } 00:30:21.179 ] 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:21.179 [2024-07-13 23:16:10.557439] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:21.179 [2024-07-13 23:16:10.557543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:21.179 [2024-07-13 23:16:10.557577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:21.179 [2024-07-13 23:16:10.559491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:21.179 23:16:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.179 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:21.437 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:21.438 "name": "Existed_Raid", 00:30:21.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.438 "strip_size_kb": 64, 00:30:21.438 "state": "configuring", 00:30:21.438 "raid_level": "raid5f", 00:30:21.438 "superblock": false, 00:30:21.438 "num_base_bdevs": 3, 00:30:21.438 "num_base_bdevs_discovered": 2, 00:30:21.438 "num_base_bdevs_operational": 3, 00:30:21.438 "base_bdevs_list": [ 00:30:21.438 { 00:30:21.438 "name": "BaseBdev1", 00:30:21.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.438 "is_configured": false, 00:30:21.438 "data_offset": 0, 00:30:21.438 "data_size": 0 00:30:21.438 }, 00:30:21.438 { 00:30:21.438 "name": "BaseBdev2", 00:30:21.438 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:21.438 "is_configured": true, 00:30:21.438 "data_offset": 0, 00:30:21.438 "data_size": 65536 00:30:21.438 }, 00:30:21.438 { 00:30:21.438 "name": "BaseBdev3", 00:30:21.438 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:21.438 "is_configured": true, 00:30:21.438 "data_offset": 0, 00:30:21.438 "data_size": 65536 00:30:21.438 } 00:30:21.438 ] 00:30:21.438 }' 00:30:21.438 23:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:21.438 23:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:22.374 [2024-07-13 23:16:11.652161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:22.374 
23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.374 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:22.633 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:22.633 "name": "Existed_Raid", 00:30:22.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.633 "strip_size_kb": 64, 00:30:22.633 "state": "configuring", 00:30:22.633 "raid_level": "raid5f", 00:30:22.633 "superblock": false, 00:30:22.633 "num_base_bdevs": 3, 00:30:22.633 "num_base_bdevs_discovered": 1, 00:30:22.633 "num_base_bdevs_operational": 3, 00:30:22.633 "base_bdevs_list": [ 00:30:22.633 { 00:30:22.633 "name": "BaseBdev1", 00:30:22.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.633 "is_configured": false, 00:30:22.633 "data_offset": 0, 00:30:22.633 "data_size": 0 00:30:22.633 }, 00:30:22.633 { 00:30:22.633 "name": null, 00:30:22.633 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:22.633 "is_configured": false, 00:30:22.633 "data_offset": 0, 00:30:22.633 "data_size": 65536 00:30:22.633 }, 00:30:22.633 { 00:30:22.633 "name": "BaseBdev3", 00:30:22.633 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:22.633 "is_configured": true, 00:30:22.633 "data_offset": 0, 00:30:22.633 "data_size": 65536 00:30:22.633 } 00:30:22.633 ] 00:30:22.633 }' 00:30:22.633 23:16:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:22.633 23:16:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.200 23:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.200 23:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:23.459 23:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:23.459 23:16:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:23.718 [2024-07-13 23:16:13.084880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:23.718 BaseBdev1 00:30:23.718 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:23.718 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:30:23.718 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:23.718 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:23.718 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:23.718 23:16:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:23.718 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:23.975 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:24.238 [ 00:30:24.238 { 00:30:24.238 "name": "BaseBdev1", 00:30:24.238 "aliases": [ 00:30:24.238 "24906f0a-a720-4c57-8e54-feca5bfce372" 00:30:24.238 ], 00:30:24.238 "product_name": "Malloc disk", 00:30:24.238 "block_size": 512, 00:30:24.238 "num_blocks": 65536, 00:30:24.238 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:24.238 "assigned_rate_limits": { 00:30:24.238 "rw_ios_per_sec": 0, 00:30:24.238 "rw_mbytes_per_sec": 0, 00:30:24.238 "r_mbytes_per_sec": 0, 00:30:24.238 "w_mbytes_per_sec": 0 00:30:24.238 }, 00:30:24.238 "claimed": true, 00:30:24.238 "claim_type": "exclusive_write", 00:30:24.238 "zoned": false, 00:30:24.238 "supported_io_types": { 00:30:24.238 "read": true, 00:30:24.238 "write": true, 00:30:24.238 "unmap": true, 00:30:24.238 "flush": true, 00:30:24.238 "reset": true, 00:30:24.238 "nvme_admin": false, 00:30:24.238 "nvme_io": false, 00:30:24.238 "nvme_io_md": false, 00:30:24.238 "write_zeroes": true, 00:30:24.238 "zcopy": true, 00:30:24.238 "get_zone_info": false, 00:30:24.238 "zone_management": false, 00:30:24.238 "zone_append": false, 00:30:24.238 "compare": false, 00:30:24.238 "compare_and_write": false, 00:30:24.238 "abort": true, 00:30:24.238 "seek_hole": false, 00:30:24.238 "seek_data": false, 00:30:24.238 "copy": true, 00:30:24.238 "nvme_iov_md": false 00:30:24.238 }, 00:30:24.238 "memory_domains": [ 00:30:24.238 { 00:30:24.238 "dma_device_id": "system", 00:30:24.238 "dma_device_type": 1 00:30:24.238 }, 00:30:24.238 { 00:30:24.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:24.238 "dma_device_type": 2 00:30:24.238 } 00:30:24.238 ], 00:30:24.238 "driver_specific": {} 00:30:24.238 } 00:30:24.238 ] 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.238 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.496 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.496 "name": "Existed_Raid", 00:30:24.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.496 "strip_size_kb": 64, 00:30:24.496 "state": "configuring", 00:30:24.496 "raid_level": "raid5f", 00:30:24.496 "superblock": false, 00:30:24.496 "num_base_bdevs": 3, 00:30:24.496 "num_base_bdevs_discovered": 2, 00:30:24.496 "num_base_bdevs_operational": 3, 00:30:24.496 "base_bdevs_list": [ 00:30:24.496 { 00:30:24.496 "name": "BaseBdev1", 00:30:24.496 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:24.496 "is_configured": true, 00:30:24.496 "data_offset": 0, 00:30:24.496 "data_size": 65536 00:30:24.496 }, 00:30:24.496 { 00:30:24.496 "name": null, 00:30:24.496 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:24.496 "is_configured": false, 00:30:24.496 "data_offset": 0, 00:30:24.496 "data_size": 65536 00:30:24.496 }, 00:30:24.496 { 00:30:24.496 "name": "BaseBdev3", 00:30:24.496 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:24.496 "is_configured": true, 00:30:24.496 "data_offset": 0, 00:30:24.496 "data_size": 65536 00:30:24.496 } 00:30:24.496 ] 00:30:24.496 }' 00:30:24.496 23:16:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.496 23:16:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.060 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.060 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:25.317 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:25.317 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:25.574 [2024-07-13 23:16:14.773588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.574 23:16:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.831 23:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.831 "name": "Existed_Raid", 00:30:25.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.831 "strip_size_kb": 64, 00:30:25.831 "state": "configuring", 00:30:25.831 "raid_level": "raid5f", 00:30:25.831 "superblock": false, 00:30:25.831 "num_base_bdevs": 3, 00:30:25.831 "num_base_bdevs_discovered": 1, 00:30:25.831 "num_base_bdevs_operational": 3, 00:30:25.831 "base_bdevs_list": [ 00:30:25.831 { 00:30:25.831 "name": "BaseBdev1", 00:30:25.831 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:25.831 "is_configured": true, 00:30:25.831 "data_offset": 0, 00:30:25.831 "data_size": 65536 00:30:25.831 }, 00:30:25.831 { 00:30:25.831 "name": null, 00:30:25.831 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:25.831 "is_configured": false, 00:30:25.831 "data_offset": 0, 00:30:25.831 "data_size": 65536 00:30:25.831 }, 00:30:25.831 { 00:30:25.831 "name": null, 00:30:25.831 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:25.831 "is_configured": false, 00:30:25.831 "data_offset": 0, 00:30:25.831 "data_size": 65536 00:30:25.831 } 00:30:25.831 ] 00:30:25.831 }' 00:30:25.831 23:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.831 23:16:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.396 23:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:26.396 23:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.654 23:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:26.654 23:16:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:26.913 [2024-07-13 23:16:16.113933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.913 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.171 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.171 "name": "Existed_Raid", 00:30:27.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.171 "strip_size_kb": 64, 00:30:27.171 "state": "configuring", 00:30:27.171 "raid_level": "raid5f", 00:30:27.171 "superblock": false, 00:30:27.171 "num_base_bdevs": 3, 00:30:27.171 "num_base_bdevs_discovered": 2, 00:30:27.171 "num_base_bdevs_operational": 3, 00:30:27.171 "base_bdevs_list": [ 00:30:27.171 { 00:30:27.171 "name": "BaseBdev1", 00:30:27.171 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:27.171 "is_configured": true, 00:30:27.171 "data_offset": 0, 00:30:27.171 "data_size": 65536 00:30:27.171 }, 00:30:27.171 { 00:30:27.171 "name": null, 00:30:27.171 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:27.171 "is_configured": false, 00:30:27.171 "data_offset": 0, 00:30:27.171 "data_size": 65536 00:30:27.171 }, 00:30:27.171 { 00:30:27.171 "name": "BaseBdev3", 00:30:27.171 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:27.171 "is_configured": true, 00:30:27.171 "data_offset": 0, 00:30:27.171 "data_size": 65536 00:30:27.171 } 00:30:27.171 ] 00:30:27.171 }' 00:30:27.171 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.171 23:16:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.737 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:27.737 23:16:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.995 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:27.995 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:28.261 [2024-07-13 23:16:17.454298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.261 
23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.261 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.524 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.524 "name": "Existed_Raid", 00:30:28.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.524 "strip_size_kb": 64, 00:30:28.524 "state": "configuring", 00:30:28.524 "raid_level": "raid5f", 00:30:28.524 "superblock": false, 00:30:28.524 "num_base_bdevs": 3, 00:30:28.524 "num_base_bdevs_discovered": 1, 00:30:28.524 "num_base_bdevs_operational": 3, 00:30:28.524 "base_bdevs_list": [ 00:30:28.524 { 00:30:28.524 "name": null, 00:30:28.524 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:28.524 "is_configured": false, 00:30:28.524 "data_offset": 0, 00:30:28.524 "data_size": 65536 00:30:28.524 }, 00:30:28.524 { 00:30:28.524 "name": null, 00:30:28.524 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:28.524 "is_configured": false, 00:30:28.524 "data_offset": 0, 00:30:28.524 "data_size": 65536 00:30:28.524 }, 00:30:28.524 { 00:30:28.524 "name": "BaseBdev3", 00:30:28.524 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:28.524 "is_configured": true, 00:30:28.524 "data_offset": 0, 00:30:28.524 "data_size": 65536 00:30:28.524 } 00:30:28.524 ] 00:30:28.524 }' 00:30:28.524 23:16:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.524 23:16:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.095 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.095 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:29.353 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:29.353 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:29.612 [2024-07-13 23:16:18.804706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.612 23:16:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:29.871 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.871 "name": "Existed_Raid", 00:30:29.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.871 "strip_size_kb": 64, 00:30:29.871 "state": "configuring", 00:30:29.871 "raid_level": "raid5f", 00:30:29.871 "superblock": false, 00:30:29.871 "num_base_bdevs": 3, 00:30:29.871 "num_base_bdevs_discovered": 2, 00:30:29.871 "num_base_bdevs_operational": 3, 00:30:29.871 "base_bdevs_list": [ 00:30:29.871 { 00:30:29.871 "name": null, 00:30:29.871 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:29.871 "is_configured": false, 00:30:29.871 "data_offset": 0, 00:30:29.871 "data_size": 65536 00:30:29.871 }, 00:30:29.871 { 00:30:29.871 "name": "BaseBdev2", 00:30:29.871 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:29.871 "is_configured": true, 00:30:29.871 "data_offset": 0, 00:30:29.871 "data_size": 65536 00:30:29.871 }, 00:30:29.871 { 00:30:29.871 "name": "BaseBdev3", 00:30:29.871 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:29.871 "is_configured": true, 00:30:29.871 "data_offset": 0, 00:30:29.871 "data_size": 65536 00:30:29.871 } 00:30:29.871 ] 00:30:29.871 }' 00:30:29.871 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.871 23:16:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.438 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.438 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:30.697 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:30.697 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.697 23:16:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:30.955 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 24906f0a-a720-4c57-8e54-feca5bfce372 00:30:31.214 [2024-07-13 23:16:20.438041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:31.214 [2024-07-13 23:16:20.438117] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:30:31.214 [2024-07-13 23:16:20.438127] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:31.214 [2024-07-13 23:16:20.438220] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:30:31.214 [2024-07-13 23:16:20.438952] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:30:31.214 [2024-07-13 23:16:20.438968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:30:31.214 [2024-07-13 23:16:20.439221] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.214 NewBaseBdev 00:30:31.214 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:31.214 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:30:31.214 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:31.214 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:30:31.215 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:31.215 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:31.215 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:31.473 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:31.732 [ 00:30:31.732 { 00:30:31.732 "name": "NewBaseBdev", 00:30:31.732 "aliases": [ 00:30:31.732 "24906f0a-a720-4c57-8e54-feca5bfce372" 00:30:31.732 ], 00:30:31.732 "product_name": "Malloc disk", 00:30:31.732 "block_size": 512, 00:30:31.732 "num_blocks": 65536, 00:30:31.732 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:31.732 "assigned_rate_limits": { 00:30:31.732 "rw_ios_per_sec": 0, 00:30:31.732 "rw_mbytes_per_sec": 0, 00:30:31.732 "r_mbytes_per_sec": 0, 00:30:31.732 "w_mbytes_per_sec": 0 00:30:31.732 }, 00:30:31.732 "claimed": true, 00:30:31.732 "claim_type": "exclusive_write", 00:30:31.732 "zoned": false, 00:30:31.732 "supported_io_types": { 00:30:31.732 "read": true, 00:30:31.732 "write": true, 00:30:31.732 "unmap": true, 00:30:31.732 "flush": true, 00:30:31.732 "reset": true, 00:30:31.732 "nvme_admin": false, 00:30:31.732 "nvme_io": false, 00:30:31.732 "nvme_io_md": false, 00:30:31.732 "write_zeroes": true, 00:30:31.732 "zcopy": true, 00:30:31.732 "get_zone_info": false, 00:30:31.732 "zone_management": false, 00:30:31.732 "zone_append": false, 00:30:31.732 "compare": false, 00:30:31.732 "compare_and_write": false, 00:30:31.732 "abort": true, 00:30:31.732 "seek_hole": false, 00:30:31.732 "seek_data": false, 00:30:31.732 "copy": true, 00:30:31.732 "nvme_iov_md": false 00:30:31.732 }, 00:30:31.732 "memory_domains": [ 00:30:31.732 { 00:30:31.732 "dma_device_id": "system", 00:30:31.732 "dma_device_type": 1 00:30:31.732 }, 00:30:31.732 { 00:30:31.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:31.732 "dma_device_type": 2 00:30:31.732 } 00:30:31.732 ], 00:30:31.732 "driver_specific": {} 00:30:31.732 } 00:30:31.732 ] 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.732 23:16:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:31.991 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:31.991 "name": "Existed_Raid", 00:30:31.991 "uuid": "e32c168f-9d1c-403c-83e4-3fda1c5ea86a", 00:30:31.991 "strip_size_kb": 64, 00:30:31.991 "state": "online", 00:30:31.991 "raid_level": "raid5f", 00:30:31.991 "superblock": false, 00:30:31.991 "num_base_bdevs": 3, 00:30:31.991 "num_base_bdevs_discovered": 3, 00:30:31.991 "num_base_bdevs_operational": 3, 00:30:31.991 "base_bdevs_list": [ 00:30:31.991 { 00:30:31.991 "name": "NewBaseBdev", 00:30:31.991 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:31.991 "is_configured": true, 00:30:31.991 "data_offset": 0, 00:30:31.991 "data_size": 65536 00:30:31.991 }, 00:30:31.991 { 00:30:31.991 "name": "BaseBdev2", 00:30:31.991 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:31.991 "is_configured": true, 00:30:31.991 "data_offset": 0, 00:30:31.991 "data_size": 65536 00:30:31.991 }, 00:30:31.991 { 00:30:31.991 "name": "BaseBdev3", 00:30:31.991 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:31.991 "is_configured": true, 00:30:31.991 "data_offset": 0, 00:30:31.991 "data_size": 65536 00:30:31.991 } 00:30:31.991 ] 00:30:31.991 }' 00:30:31.991 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:31.991 23:16:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:32.557 23:16:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:32.557 23:16:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:32.814 [2024-07-13 23:16:22.082719] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:32.814 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:32.814 "name": "Existed_Raid", 00:30:32.814 "aliases": [ 00:30:32.814 "e32c168f-9d1c-403c-83e4-3fda1c5ea86a" 00:30:32.814 ], 00:30:32.814 "product_name": "Raid Volume", 00:30:32.814 "block_size": 512, 00:30:32.814 "num_blocks": 131072, 00:30:32.814 "uuid": "e32c168f-9d1c-403c-83e4-3fda1c5ea86a", 00:30:32.814 "assigned_rate_limits": { 00:30:32.814 "rw_ios_per_sec": 0, 00:30:32.814 "rw_mbytes_per_sec": 0, 00:30:32.814 "r_mbytes_per_sec": 0, 00:30:32.814 "w_mbytes_per_sec": 0 00:30:32.814 }, 00:30:32.814 "claimed": false, 00:30:32.814 "zoned": false, 00:30:32.814 "supported_io_types": { 00:30:32.814 "read": true, 00:30:32.814 "write": true, 00:30:32.814 "unmap": false, 00:30:32.814 "flush": false, 00:30:32.814 "reset": true, 00:30:32.814 "nvme_admin": false, 00:30:32.814 "nvme_io": false, 00:30:32.814 "nvme_io_md": false, 00:30:32.814 "write_zeroes": true, 00:30:32.814 "zcopy": false, 00:30:32.814 "get_zone_info": false, 00:30:32.814 "zone_management": false, 00:30:32.814 "zone_append": false, 00:30:32.814 "compare": false, 00:30:32.814 "compare_and_write": false, 00:30:32.814 "abort": false, 00:30:32.814 "seek_hole": false, 00:30:32.814 "seek_data": false, 00:30:32.814 "copy": false, 00:30:32.814 "nvme_iov_md": false 00:30:32.814 }, 00:30:32.814 "driver_specific": { 00:30:32.814 "raid": { 00:30:32.814 "uuid": "e32c168f-9d1c-403c-83e4-3fda1c5ea86a", 00:30:32.814 "strip_size_kb": 64, 00:30:32.814 "state": "online", 00:30:32.814 "raid_level": "raid5f", 00:30:32.814 "superblock": false, 00:30:32.814 "num_base_bdevs": 3, 00:30:32.814 "num_base_bdevs_discovered": 3, 00:30:32.814 "num_base_bdevs_operational": 3, 00:30:32.814 "base_bdevs_list": [ 00:30:32.814 { 00:30:32.814 "name": "NewBaseBdev", 00:30:32.814 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:32.814 "is_configured": true, 00:30:32.814 "data_offset": 0, 00:30:32.814 "data_size": 65536 00:30:32.814 }, 00:30:32.814 { 00:30:32.814 "name": "BaseBdev2", 00:30:32.814 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:32.814 "is_configured": true, 00:30:32.814 "data_offset": 0, 00:30:32.814 "data_size": 65536 00:30:32.814 }, 00:30:32.814 { 00:30:32.814 "name": "BaseBdev3", 00:30:32.814 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 00:30:32.814 "is_configured": true, 00:30:32.814 "data_offset": 0, 00:30:32.814 "data_size": 65536 00:30:32.814 } 00:30:32.814 ] 00:30:32.814 } 00:30:32.814 } 00:30:32.814 }' 00:30:32.814 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:32.814 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:32.814 BaseBdev2 00:30:32.814 BaseBdev3' 00:30:32.814 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:32.814 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:32.814 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:33.072 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:30:33.072 "name": "NewBaseBdev", 00:30:33.072 "aliases": [ 00:30:33.072 "24906f0a-a720-4c57-8e54-feca5bfce372" 00:30:33.072 ], 00:30:33.072 "product_name": "Malloc disk", 00:30:33.072 "block_size": 512, 00:30:33.072 "num_blocks": 65536, 00:30:33.072 "uuid": "24906f0a-a720-4c57-8e54-feca5bfce372", 00:30:33.072 "assigned_rate_limits": { 00:30:33.072 "rw_ios_per_sec": 0, 00:30:33.072 "rw_mbytes_per_sec": 0, 00:30:33.072 "r_mbytes_per_sec": 0, 00:30:33.072 "w_mbytes_per_sec": 0 00:30:33.072 }, 00:30:33.072 "claimed": true, 00:30:33.072 "claim_type": "exclusive_write", 00:30:33.072 "zoned": false, 00:30:33.072 "supported_io_types": { 00:30:33.072 "read": true, 00:30:33.072 "write": true, 00:30:33.072 "unmap": true, 00:30:33.072 "flush": true, 00:30:33.072 "reset": true, 00:30:33.072 "nvme_admin": false, 00:30:33.072 "nvme_io": false, 00:30:33.072 "nvme_io_md": false, 00:30:33.072 "write_zeroes": true, 00:30:33.072 "zcopy": true, 00:30:33.072 "get_zone_info": false, 00:30:33.072 "zone_management": false, 00:30:33.072 "zone_append": false, 00:30:33.072 "compare": false, 00:30:33.072 "compare_and_write": false, 00:30:33.072 "abort": true, 00:30:33.072 "seek_hole": false, 00:30:33.072 "seek_data": false, 00:30:33.072 "copy": true, 00:30:33.072 "nvme_iov_md": false 00:30:33.072 }, 00:30:33.072 "memory_domains": [ 00:30:33.072 { 00:30:33.072 "dma_device_id": "system", 00:30:33.072 "dma_device_type": 1 00:30:33.072 }, 00:30:33.072 { 00:30:33.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.072 "dma_device_type": 2 00:30:33.072 } 00:30:33.072 ], 00:30:33.072 "driver_specific": {} 00:30:33.072 }' 00:30:33.072 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.072 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.072 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:33.072 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:33.330 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:33.587 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:33.587 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:33.587 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:33.587 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:33.587 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:33.587 "name": "BaseBdev2", 00:30:33.587 "aliases": [ 00:30:33.587 "2b2772a7-9e12-4545-be4e-78bad638fa08" 00:30:33.587 ], 00:30:33.587 
"product_name": "Malloc disk", 00:30:33.587 "block_size": 512, 00:30:33.587 "num_blocks": 65536, 00:30:33.587 "uuid": "2b2772a7-9e12-4545-be4e-78bad638fa08", 00:30:33.587 "assigned_rate_limits": { 00:30:33.587 "rw_ios_per_sec": 0, 00:30:33.587 "rw_mbytes_per_sec": 0, 00:30:33.587 "r_mbytes_per_sec": 0, 00:30:33.587 "w_mbytes_per_sec": 0 00:30:33.587 }, 00:30:33.587 "claimed": true, 00:30:33.587 "claim_type": "exclusive_write", 00:30:33.587 "zoned": false, 00:30:33.587 "supported_io_types": { 00:30:33.587 "read": true, 00:30:33.587 "write": true, 00:30:33.587 "unmap": true, 00:30:33.587 "flush": true, 00:30:33.587 "reset": true, 00:30:33.587 "nvme_admin": false, 00:30:33.587 "nvme_io": false, 00:30:33.587 "nvme_io_md": false, 00:30:33.587 "write_zeroes": true, 00:30:33.587 "zcopy": true, 00:30:33.587 "get_zone_info": false, 00:30:33.587 "zone_management": false, 00:30:33.587 "zone_append": false, 00:30:33.587 "compare": false, 00:30:33.587 "compare_and_write": false, 00:30:33.587 "abort": true, 00:30:33.587 "seek_hole": false, 00:30:33.587 "seek_data": false, 00:30:33.587 "copy": true, 00:30:33.587 "nvme_iov_md": false 00:30:33.587 }, 00:30:33.587 "memory_domains": [ 00:30:33.588 { 00:30:33.588 "dma_device_id": "system", 00:30:33.588 "dma_device_type": 1 00:30:33.588 }, 00:30:33.588 { 00:30:33.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.588 "dma_device_type": 2 00:30:33.588 } 00:30:33.588 ], 00:30:33.588 "driver_specific": {} 00:30:33.588 }' 00:30:33.845 23:16:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.845 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:34.103 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:34.361 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:34.361 "name": "BaseBdev3", 00:30:34.361 "aliases": [ 00:30:34.361 "d203236a-fce3-4e1c-bc60-cc234ef7cd56" 00:30:34.361 ], 00:30:34.361 "product_name": "Malloc disk", 00:30:34.361 "block_size": 512, 00:30:34.361 "num_blocks": 65536, 00:30:34.361 "uuid": "d203236a-fce3-4e1c-bc60-cc234ef7cd56", 
00:30:34.361 "assigned_rate_limits": { 00:30:34.361 "rw_ios_per_sec": 0, 00:30:34.361 "rw_mbytes_per_sec": 0, 00:30:34.361 "r_mbytes_per_sec": 0, 00:30:34.361 "w_mbytes_per_sec": 0 00:30:34.361 }, 00:30:34.361 "claimed": true, 00:30:34.361 "claim_type": "exclusive_write", 00:30:34.361 "zoned": false, 00:30:34.361 "supported_io_types": { 00:30:34.361 "read": true, 00:30:34.361 "write": true, 00:30:34.361 "unmap": true, 00:30:34.361 "flush": true, 00:30:34.361 "reset": true, 00:30:34.361 "nvme_admin": false, 00:30:34.361 "nvme_io": false, 00:30:34.361 "nvme_io_md": false, 00:30:34.361 "write_zeroes": true, 00:30:34.361 "zcopy": true, 00:30:34.361 "get_zone_info": false, 00:30:34.361 "zone_management": false, 00:30:34.361 "zone_append": false, 00:30:34.361 "compare": false, 00:30:34.361 "compare_and_write": false, 00:30:34.361 "abort": true, 00:30:34.361 "seek_hole": false, 00:30:34.361 "seek_data": false, 00:30:34.361 "copy": true, 00:30:34.361 "nvme_iov_md": false 00:30:34.361 }, 00:30:34.361 "memory_domains": [ 00:30:34.361 { 00:30:34.361 "dma_device_id": "system", 00:30:34.361 "dma_device_type": 1 00:30:34.361 }, 00:30:34.361 { 00:30:34.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.361 "dma_device_type": 2 00:30:34.361 } 00:30:34.361 ], 00:30:34.361 "driver_specific": {} 00:30:34.361 }' 00:30:34.361 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.361 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.361 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:34.361 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.361 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.619 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:34.619 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.619 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.619 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:34.619 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.619 23:16:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.878 23:16:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:34.878 23:16:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:35.137 [2024-07-13 23:16:24.299022] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:35.137 [2024-07-13 23:16:24.299074] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:35.137 [2024-07-13 23:16:24.299195] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:35.137 [2024-07-13 23:16:24.299508] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:35.137 [2024-07-13 23:16:24.299525] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:30:35.137 23:16:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 159628 
00:30:35.137 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 159628 ']' 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 159628 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159628 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159628' 00:30:35.138 killing process with pid 159628 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 159628 00:30:35.138 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 159628 00:30:35.138 [2024-07-13 23:16:24.344886] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:35.138 [2024-07-13 23:16:24.380136] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:30:35.397 00:30:35.397 real 0m29.542s 00:30:35.397 user 0m56.273s 00:30:35.397 sys 0m3.336s 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.397 ************************************ 00:30:35.397 END TEST raid5f_state_function_test 00:30:35.397 ************************************ 00:30:35.397 23:16:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:35.397 23:16:24 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:30:35.397 23:16:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:30:35.397 23:16:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.397 23:16:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:35.397 ************************************ 00:30:35.397 START TEST raid5f_state_function_test_sb 00:30:35.397 ************************************ 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 true 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:35.397 23:16:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:35.397 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=160592 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160592' 00:30:35.398 Process raid pid: 160592 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 160592 /var/tmp/spdk-raid.sock 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 160592 ']' 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:35.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.398 23:16:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.398 [2024-07-13 23:16:24.752392] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:35.398 [2024-07-13 23:16:24.752806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.657 [2024-07-13 23:16:24.899082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.657 [2024-07-13 23:16:24.993930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.657 [2024-07-13 23:16:25.051841] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:36.592 23:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:36.592 23:16:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:30:36.593 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:36.593 [2024-07-13 23:16:25.982675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:36.593 [2024-07-13 23:16:25.983012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:36.593 [2024-07-13 23:16:25.983127] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:36.593 [2024-07-13 23:16:25.983194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:36.593 [2024-07-13 23:16:25.983291] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:36.593 [2024-07-13 23:16:25.983380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.851 23:16:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.851 23:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.851 23:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.110 23:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.110 "name": "Existed_Raid", 00:30:37.110 "uuid": "f774d416-b394-4d96-b7bf-3cfb2525e1fe", 00:30:37.110 "strip_size_kb": 64, 00:30:37.110 "state": "configuring", 00:30:37.110 "raid_level": "raid5f", 00:30:37.110 "superblock": true, 00:30:37.110 "num_base_bdevs": 3, 00:30:37.110 "num_base_bdevs_discovered": 0, 00:30:37.110 "num_base_bdevs_operational": 3, 00:30:37.110 "base_bdevs_list": [ 00:30:37.110 { 00:30:37.110 "name": "BaseBdev1", 00:30:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.110 "is_configured": false, 00:30:37.110 "data_offset": 0, 00:30:37.110 "data_size": 0 00:30:37.110 }, 00:30:37.110 { 00:30:37.110 "name": "BaseBdev2", 00:30:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.110 "is_configured": false, 00:30:37.110 "data_offset": 0, 00:30:37.110 "data_size": 0 00:30:37.110 }, 00:30:37.110 { 00:30:37.110 "name": "BaseBdev3", 00:30:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.110 "is_configured": false, 00:30:37.110 "data_offset": 0, 00:30:37.110 "data_size": 0 00:30:37.110 } 00:30:37.110 ] 00:30:37.110 }' 00:30:37.110 23:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.110 23:16:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.677 23:16:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:37.935 [2024-07-13 23:16:27.206739] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:37.935 [2024-07-13 23:16:27.206957] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:30:37.935 23:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:38.193 [2024-07-13 23:16:27.438809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:38.193 [2024-07-13 23:16:27.439167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:38.193 [2024-07-13 23:16:27.439288] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:38.193 [2024-07-13 23:16:27.439353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:38.193 [2024-07-13 23:16:27.439452] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:38.193 [2024-07-13 23:16:27.439520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:38.194 23:16:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:38.452 [2024-07-13 23:16:27.677747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:38.452 BaseBdev1 00:30:38.452 23:16:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:38.452 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:30:38.452 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:38.452 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:30:38.452 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:38.452 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:38.452 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:38.709 23:16:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:38.967 [ 00:30:38.967 { 00:30:38.967 "name": "BaseBdev1", 00:30:38.967 "aliases": [ 00:30:38.967 "1e886040-8846-4646-9b2e-3d484faa0644" 00:30:38.967 ], 00:30:38.967 "product_name": "Malloc disk", 00:30:38.967 "block_size": 512, 00:30:38.967 "num_blocks": 65536, 00:30:38.967 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:38.967 "assigned_rate_limits": { 00:30:38.967 "rw_ios_per_sec": 0, 00:30:38.967 "rw_mbytes_per_sec": 0, 00:30:38.967 "r_mbytes_per_sec": 0, 00:30:38.967 "w_mbytes_per_sec": 0 00:30:38.967 }, 00:30:38.967 "claimed": true, 00:30:38.967 "claim_type": "exclusive_write", 00:30:38.967 "zoned": false, 00:30:38.967 "supported_io_types": { 00:30:38.967 "read": true, 00:30:38.967 "write": true, 00:30:38.967 "unmap": true, 00:30:38.967 "flush": true, 00:30:38.967 "reset": true, 00:30:38.967 "nvme_admin": false, 00:30:38.967 "nvme_io": false, 00:30:38.967 "nvme_io_md": false, 00:30:38.968 "write_zeroes": true, 00:30:38.968 "zcopy": true, 00:30:38.968 "get_zone_info": false, 00:30:38.968 "zone_management": false, 00:30:38.968 "zone_append": false, 00:30:38.968 "compare": false, 00:30:38.968 "compare_and_write": false, 00:30:38.968 "abort": true, 00:30:38.968 "seek_hole": false, 00:30:38.968 "seek_data": false, 00:30:38.968 "copy": true, 00:30:38.968 "nvme_iov_md": false 00:30:38.968 }, 00:30:38.968 "memory_domains": [ 00:30:38.968 { 00:30:38.968 "dma_device_id": "system", 00:30:38.968 "dma_device_type": 1 00:30:38.968 }, 00:30:38.968 { 00:30:38.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.968 "dma_device_type": 2 00:30:38.968 } 00:30:38.968 ], 00:30:38.968 "driver_specific": {} 00:30:38.968 } 00:30:38.968 ] 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.968 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.226 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.226 "name": "Existed_Raid", 00:30:39.226 "uuid": "b5815146-9d0e-4278-b69d-94654ff75b8f", 00:30:39.226 "strip_size_kb": 64, 00:30:39.226 "state": "configuring", 00:30:39.226 "raid_level": "raid5f", 00:30:39.226 "superblock": true, 00:30:39.226 "num_base_bdevs": 3, 00:30:39.226 "num_base_bdevs_discovered": 1, 00:30:39.226 "num_base_bdevs_operational": 3, 00:30:39.226 "base_bdevs_list": [ 00:30:39.226 { 00:30:39.226 "name": "BaseBdev1", 00:30:39.226 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:39.226 "is_configured": true, 00:30:39.226 "data_offset": 2048, 00:30:39.226 "data_size": 63488 00:30:39.226 }, 00:30:39.226 { 00:30:39.226 "name": "BaseBdev2", 00:30:39.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.226 "is_configured": false, 00:30:39.226 "data_offset": 0, 00:30:39.226 "data_size": 0 00:30:39.226 }, 00:30:39.226 { 00:30:39.226 "name": "BaseBdev3", 00:30:39.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.226 "is_configured": false, 00:30:39.226 "data_offset": 0, 00:30:39.226 "data_size": 0 00:30:39.226 } 00:30:39.226 ] 00:30:39.226 }' 00:30:39.226 23:16:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.226 23:16:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.794 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:40.052 [2024-07-13 23:16:29.346203] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:40.052 [2024-07-13 23:16:29.346460] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:30:40.053 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:40.310 [2024-07-13 23:16:29.578316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:40.310 [2024-07-13 23:16:29.580685] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:40.310 [2024-07-13 23:16:29.580895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:40.310 [2024-07-13 23:16:29.581038] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:40.310 [2024-07-13 23:16:29.581109] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.310 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.311 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.311 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.569 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.569 "name": "Existed_Raid", 00:30:40.569 "uuid": "32504fd8-fae4-4205-a7fc-bc33c6eeec80", 00:30:40.569 "strip_size_kb": 64, 00:30:40.569 "state": "configuring", 00:30:40.569 "raid_level": "raid5f", 00:30:40.569 "superblock": true, 00:30:40.569 "num_base_bdevs": 3, 00:30:40.569 "num_base_bdevs_discovered": 1, 00:30:40.569 "num_base_bdevs_operational": 3, 00:30:40.569 "base_bdevs_list": [ 00:30:40.569 { 00:30:40.569 "name": "BaseBdev1", 00:30:40.569 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:40.569 "is_configured": true, 00:30:40.569 "data_offset": 2048, 00:30:40.569 "data_size": 63488 00:30:40.569 }, 00:30:40.569 { 00:30:40.569 "name": "BaseBdev2", 00:30:40.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.569 "is_configured": false, 00:30:40.569 "data_offset": 0, 00:30:40.569 "data_size": 0 00:30:40.569 }, 00:30:40.569 { 00:30:40.569 "name": "BaseBdev3", 00:30:40.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.569 "is_configured": false, 00:30:40.569 "data_offset": 0, 00:30:40.569 "data_size": 0 00:30:40.569 } 00:30:40.569 ] 00:30:40.569 }' 00:30:40.569 23:16:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.569 23:16:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.134 23:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:41.392 [2024-07-13 23:16:30.772805] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:41.392 BaseBdev2 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:41.392 23:16:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:41.958 [ 00:30:41.958 { 00:30:41.958 "name": "BaseBdev2", 00:30:41.958 "aliases": [ 00:30:41.958 "2ea51f32-20f9-49c5-b206-81f1a3bd1df9" 00:30:41.958 ], 00:30:41.958 "product_name": "Malloc disk", 00:30:41.958 "block_size": 512, 00:30:41.958 "num_blocks": 65536, 00:30:41.958 "uuid": "2ea51f32-20f9-49c5-b206-81f1a3bd1df9", 00:30:41.958 "assigned_rate_limits": { 00:30:41.958 "rw_ios_per_sec": 0, 00:30:41.958 "rw_mbytes_per_sec": 0, 00:30:41.958 "r_mbytes_per_sec": 0, 00:30:41.958 "w_mbytes_per_sec": 0 00:30:41.958 }, 00:30:41.958 "claimed": true, 00:30:41.958 "claim_type": "exclusive_write", 00:30:41.958 "zoned": false, 00:30:41.958 "supported_io_types": { 00:30:41.958 "read": true, 00:30:41.958 "write": true, 00:30:41.958 "unmap": true, 00:30:41.958 "flush": true, 00:30:41.958 "reset": true, 00:30:41.958 "nvme_admin": false, 00:30:41.958 "nvme_io": false, 00:30:41.958 "nvme_io_md": false, 00:30:41.958 "write_zeroes": true, 00:30:41.958 "zcopy": true, 00:30:41.958 "get_zone_info": false, 00:30:41.958 "zone_management": false, 00:30:41.958 "zone_append": false, 00:30:41.958 "compare": false, 00:30:41.958 "compare_and_write": false, 00:30:41.958 "abort": true, 00:30:41.958 "seek_hole": false, 00:30:41.958 "seek_data": false, 00:30:41.958 "copy": true, 00:30:41.958 "nvme_iov_md": false 00:30:41.958 }, 00:30:41.958 "memory_domains": [ 00:30:41.958 { 00:30:41.958 "dma_device_id": "system", 00:30:41.958 "dma_device_type": 1 00:30:41.958 }, 00:30:41.958 { 00:30:41.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.958 "dma_device_type": 2 00:30:41.958 } 00:30:41.958 ], 00:30:41.958 "driver_specific": {} 00:30:41.958 } 00:30:41.958 ] 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.958 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.216 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.216 "name": "Existed_Raid", 00:30:42.216 "uuid": "32504fd8-fae4-4205-a7fc-bc33c6eeec80", 00:30:42.216 "strip_size_kb": 64, 00:30:42.216 "state": "configuring", 00:30:42.216 "raid_level": "raid5f", 00:30:42.216 "superblock": true, 00:30:42.216 "num_base_bdevs": 3, 00:30:42.216 "num_base_bdevs_discovered": 2, 00:30:42.216 "num_base_bdevs_operational": 3, 00:30:42.216 "base_bdevs_list": [ 00:30:42.216 { 00:30:42.216 "name": "BaseBdev1", 00:30:42.216 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:42.216 "is_configured": true, 00:30:42.216 "data_offset": 2048, 00:30:42.216 "data_size": 63488 00:30:42.216 }, 00:30:42.216 { 00:30:42.216 "name": "BaseBdev2", 00:30:42.216 "uuid": "2ea51f32-20f9-49c5-b206-81f1a3bd1df9", 00:30:42.216 "is_configured": true, 00:30:42.216 "data_offset": 2048, 00:30:42.216 "data_size": 63488 00:30:42.216 }, 00:30:42.216 { 00:30:42.216 "name": "BaseBdev3", 00:30:42.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.216 "is_configured": false, 00:30:42.216 "data_offset": 0, 00:30:42.216 "data_size": 0 00:30:42.216 } 00:30:42.216 ] 00:30:42.216 }' 00:30:42.216 23:16:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.216 23:16:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.152 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:43.152 [2024-07-13 23:16:32.494245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:43.152 [2024-07-13 23:16:32.494525] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:30:43.152 [2024-07-13 23:16:32.494541] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:43.152 [2024-07-13 23:16:32.494702] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:30:43.152 BaseBdev3 00:30:43.152 [2024-07-13 23:16:32.495519] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:30:43.152 [2024-07-13 23:16:32.495551] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:30:43.152 [2024-07-13 23:16:32.495716] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:43.152 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:43.153 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:30:43.153 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:43.153 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:30:43.153 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:43.153 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:43.153 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:43.411 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:43.670 [ 00:30:43.670 { 00:30:43.670 "name": "BaseBdev3", 00:30:43.670 "aliases": [ 00:30:43.670 "b701ea3b-4151-483a-ac55-51cd15249f09" 00:30:43.670 ], 00:30:43.670 "product_name": "Malloc disk", 00:30:43.670 "block_size": 512, 00:30:43.670 "num_blocks": 65536, 00:30:43.670 "uuid": "b701ea3b-4151-483a-ac55-51cd15249f09", 00:30:43.670 "assigned_rate_limits": { 00:30:43.670 "rw_ios_per_sec": 0, 00:30:43.670 "rw_mbytes_per_sec": 0, 00:30:43.670 "r_mbytes_per_sec": 0, 00:30:43.670 "w_mbytes_per_sec": 0 00:30:43.670 }, 00:30:43.670 "claimed": true, 00:30:43.670 "claim_type": "exclusive_write", 00:30:43.670 "zoned": false, 00:30:43.670 "supported_io_types": { 00:30:43.670 "read": true, 00:30:43.670 "write": true, 00:30:43.670 "unmap": true, 00:30:43.670 "flush": true, 00:30:43.670 "reset": true, 00:30:43.670 "nvme_admin": false, 00:30:43.670 "nvme_io": false, 00:30:43.670 "nvme_io_md": false, 00:30:43.670 "write_zeroes": true, 00:30:43.670 "zcopy": true, 00:30:43.670 "get_zone_info": false, 00:30:43.670 "zone_management": false, 00:30:43.670 "zone_append": false, 00:30:43.670 "compare": false, 00:30:43.670 "compare_and_write": false, 00:30:43.670 "abort": true, 00:30:43.670 "seek_hole": false, 00:30:43.670 "seek_data": false, 00:30:43.670 "copy": true, 00:30:43.670 "nvme_iov_md": false 00:30:43.670 }, 00:30:43.670 "memory_domains": [ 00:30:43.670 { 00:30:43.670 "dma_device_id": "system", 00:30:43.670 "dma_device_type": 1 00:30:43.670 }, 00:30:43.670 { 00:30:43.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.670 "dma_device_type": 2 00:30:43.670 } 00:30:43.670 ], 00:30:43.670 "driver_specific": {} 00:30:43.670 } 00:30:43.670 ] 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.670 23:16:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:43.929 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:43.929 "name": "Existed_Raid", 00:30:43.929 "uuid": "32504fd8-fae4-4205-a7fc-bc33c6eeec80", 00:30:43.929 "strip_size_kb": 64, 00:30:43.929 "state": "online", 00:30:43.929 "raid_level": "raid5f", 00:30:43.929 "superblock": true, 00:30:43.929 "num_base_bdevs": 3, 00:30:43.929 "num_base_bdevs_discovered": 3, 00:30:43.929 "num_base_bdevs_operational": 3, 00:30:43.929 "base_bdevs_list": [ 00:30:43.929 { 00:30:43.929 "name": "BaseBdev1", 00:30:43.929 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:43.929 "is_configured": true, 00:30:43.929 "data_offset": 2048, 00:30:43.929 "data_size": 63488 00:30:43.929 }, 00:30:43.929 { 00:30:43.929 "name": "BaseBdev2", 00:30:43.929 "uuid": "2ea51f32-20f9-49c5-b206-81f1a3bd1df9", 00:30:43.929 "is_configured": true, 00:30:43.929 "data_offset": 2048, 00:30:43.929 "data_size": 63488 00:30:43.929 }, 00:30:43.929 { 00:30:43.929 "name": "BaseBdev3", 00:30:43.929 "uuid": "b701ea3b-4151-483a-ac55-51cd15249f09", 00:30:43.929 "is_configured": true, 00:30:43.929 "data_offset": 2048, 00:30:43.929 "data_size": 63488 00:30:43.929 } 00:30:43.929 ] 00:30:43.929 }' 00:30:43.929 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:43.929 23:16:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
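Claiming the third base bdev is the trigger: the array configures itself and flips from "configuring" to "online" with no further RPC, as the dump above records. The verify_raid_bdev_state check plausibly condenses to (an assumed simplification of the helper in bdev_raid.sh@116-128):

    tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$tmp") == online ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") -eq 3 ]]
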
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:44.497 23:16:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:44.754 [2024-07-13 23:16:34.054829] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:44.754 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:44.754 "name": "Existed_Raid", 00:30:44.754 "aliases": [ 00:30:44.754 "32504fd8-fae4-4205-a7fc-bc33c6eeec80" 00:30:44.754 ], 00:30:44.754 "product_name": "Raid Volume", 00:30:44.754 "block_size": 512, 00:30:44.754 "num_blocks": 126976, 00:30:44.754 "uuid": "32504fd8-fae4-4205-a7fc-bc33c6eeec80", 00:30:44.754 "assigned_rate_limits": { 00:30:44.755 "rw_ios_per_sec": 0, 00:30:44.755 "rw_mbytes_per_sec": 0, 00:30:44.755 "r_mbytes_per_sec": 0, 00:30:44.755 "w_mbytes_per_sec": 0 00:30:44.755 }, 00:30:44.755 "claimed": false, 00:30:44.755 "zoned": false, 00:30:44.755 "supported_io_types": { 00:30:44.755 "read": true, 00:30:44.755 "write": true, 00:30:44.755 "unmap": false, 00:30:44.755 "flush": false, 00:30:44.755 "reset": true, 00:30:44.755 "nvme_admin": false, 00:30:44.755 "nvme_io": false, 00:30:44.755 "nvme_io_md": false, 00:30:44.755 "write_zeroes": true, 00:30:44.755 "zcopy": false, 00:30:44.755 "get_zone_info": false, 00:30:44.755 "zone_management": false, 00:30:44.755 "zone_append": false, 00:30:44.755 "compare": false, 00:30:44.755 "compare_and_write": false, 00:30:44.755 "abort": false, 00:30:44.755 "seek_hole": false, 00:30:44.755 "seek_data": false, 00:30:44.755 "copy": false, 00:30:44.755 "nvme_iov_md": false 00:30:44.755 }, 00:30:44.755 "driver_specific": { 00:30:44.755 "raid": { 00:30:44.755 "uuid": "32504fd8-fae4-4205-a7fc-bc33c6eeec80", 00:30:44.755 "strip_size_kb": 64, 00:30:44.755 "state": "online", 00:30:44.755 "raid_level": "raid5f", 00:30:44.755 "superblock": true, 00:30:44.755 "num_base_bdevs": 3, 00:30:44.755 "num_base_bdevs_discovered": 3, 00:30:44.755 "num_base_bdevs_operational": 3, 00:30:44.755 "base_bdevs_list": [ 00:30:44.755 { 00:30:44.755 "name": "BaseBdev1", 00:30:44.755 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:44.755 "is_configured": true, 00:30:44.755 "data_offset": 2048, 00:30:44.755 "data_size": 63488 00:30:44.755 }, 00:30:44.755 { 00:30:44.755 "name": "BaseBdev2", 00:30:44.755 "uuid": "2ea51f32-20f9-49c5-b206-81f1a3bd1df9", 00:30:44.755 "is_configured": true, 00:30:44.755 "data_offset": 2048, 00:30:44.755 "data_size": 63488 00:30:44.755 }, 00:30:44.755 { 00:30:44.755 "name": "BaseBdev3", 00:30:44.755 "uuid": "b701ea3b-4151-483a-ac55-51cd15249f09", 00:30:44.755 "is_configured": true, 00:30:44.755 "data_offset": 2048, 00:30:44.755 "data_size": 63488 00:30:44.755 } 00:30:44.755 ] 00:30:44.755 } 00:30:44.755 } 00:30:44.755 }' 00:30:44.755 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:44.755 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:44.755 BaseBdev2 00:30:44.755 BaseBdev3' 00:30:44.755 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:44.755 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:44.755 23:16:34 
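The sizes in this Raid Volume dump are internally consistent: each 65536-block malloc bdev gives up 2048 blocks (1 MiB at 512 B) to the superblock, leaving data_size 63488, and raid5f spends one member's worth of capacity on parity, so the volume exposes (3 - 1) * 63488 = 126976 blocks:

    echo $((65536 - 2048))        # 63488: data blocks per base bdev after the superblock offset
    echo $(( (3 - 1) * 63488 ))   # 126976: raid5f volume capacity with 3 members
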
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:45.031 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:45.031 "name": "BaseBdev1", 00:30:45.031 "aliases": [ 00:30:45.031 "1e886040-8846-4646-9b2e-3d484faa0644" 00:30:45.031 ], 00:30:45.031 "product_name": "Malloc disk", 00:30:45.031 "block_size": 512, 00:30:45.031 "num_blocks": 65536, 00:30:45.031 "uuid": "1e886040-8846-4646-9b2e-3d484faa0644", 00:30:45.031 "assigned_rate_limits": { 00:30:45.031 "rw_ios_per_sec": 0, 00:30:45.031 "rw_mbytes_per_sec": 0, 00:30:45.031 "r_mbytes_per_sec": 0, 00:30:45.031 "w_mbytes_per_sec": 0 00:30:45.031 }, 00:30:45.031 "claimed": true, 00:30:45.031 "claim_type": "exclusive_write", 00:30:45.031 "zoned": false, 00:30:45.031 "supported_io_types": { 00:30:45.031 "read": true, 00:30:45.031 "write": true, 00:30:45.031 "unmap": true, 00:30:45.031 "flush": true, 00:30:45.031 "reset": true, 00:30:45.031 "nvme_admin": false, 00:30:45.031 "nvme_io": false, 00:30:45.031 "nvme_io_md": false, 00:30:45.031 "write_zeroes": true, 00:30:45.031 "zcopy": true, 00:30:45.031 "get_zone_info": false, 00:30:45.031 "zone_management": false, 00:30:45.031 "zone_append": false, 00:30:45.031 "compare": false, 00:30:45.031 "compare_and_write": false, 00:30:45.031 "abort": true, 00:30:45.031 "seek_hole": false, 00:30:45.031 "seek_data": false, 00:30:45.031 "copy": true, 00:30:45.031 "nvme_iov_md": false 00:30:45.031 }, 00:30:45.031 "memory_domains": [ 00:30:45.031 { 00:30:45.031 "dma_device_id": "system", 00:30:45.031 "dma_device_type": 1 00:30:45.031 }, 00:30:45.031 { 00:30:45.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.031 "dma_device_type": 2 00:30:45.031 } 00:30:45.031 ], 00:30:45.031 "driver_specific": {} 00:30:45.031 }' 00:30:45.031 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.031 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:45.300 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:45.557 23:16:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:45.557 "name": "BaseBdev2", 00:30:45.557 "aliases": [ 00:30:45.557 "2ea51f32-20f9-49c5-b206-81f1a3bd1df9" 00:30:45.558 ], 00:30:45.558 "product_name": "Malloc disk", 00:30:45.558 "block_size": 512, 00:30:45.558 "num_blocks": 65536, 00:30:45.558 "uuid": "2ea51f32-20f9-49c5-b206-81f1a3bd1df9", 00:30:45.558 "assigned_rate_limits": { 00:30:45.558 "rw_ios_per_sec": 0, 00:30:45.558 "rw_mbytes_per_sec": 0, 00:30:45.558 "r_mbytes_per_sec": 0, 00:30:45.558 "w_mbytes_per_sec": 0 00:30:45.558 }, 00:30:45.558 "claimed": true, 00:30:45.558 "claim_type": "exclusive_write", 00:30:45.558 "zoned": false, 00:30:45.558 "supported_io_types": { 00:30:45.558 "read": true, 00:30:45.558 "write": true, 00:30:45.558 "unmap": true, 00:30:45.558 "flush": true, 00:30:45.558 "reset": true, 00:30:45.558 "nvme_admin": false, 00:30:45.558 "nvme_io": false, 00:30:45.558 "nvme_io_md": false, 00:30:45.558 "write_zeroes": true, 00:30:45.558 "zcopy": true, 00:30:45.558 "get_zone_info": false, 00:30:45.558 "zone_management": false, 00:30:45.558 "zone_append": false, 00:30:45.558 "compare": false, 00:30:45.558 "compare_and_write": false, 00:30:45.558 "abort": true, 00:30:45.558 "seek_hole": false, 00:30:45.558 "seek_data": false, 00:30:45.558 "copy": true, 00:30:45.558 "nvme_iov_md": false 00:30:45.558 }, 00:30:45.558 "memory_domains": [ 00:30:45.558 { 00:30:45.558 "dma_device_id": "system", 00:30:45.558 "dma_device_type": 1 00:30:45.558 }, 00:30:45.558 { 00:30:45.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.558 "dma_device_type": 2 00:30:45.558 } 00:30:45.558 ], 00:30:45.558 "driver_specific": {} 00:30:45.558 }' 00:30:45.558 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.816 23:16:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:45.816 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:45.816 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.816 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:45.816 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:45.816 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:45.816 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:46.074 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:46.332 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:46.332 "name": 
"BaseBdev3", 00:30:46.332 "aliases": [ 00:30:46.332 "b701ea3b-4151-483a-ac55-51cd15249f09" 00:30:46.332 ], 00:30:46.332 "product_name": "Malloc disk", 00:30:46.332 "block_size": 512, 00:30:46.332 "num_blocks": 65536, 00:30:46.332 "uuid": "b701ea3b-4151-483a-ac55-51cd15249f09", 00:30:46.332 "assigned_rate_limits": { 00:30:46.332 "rw_ios_per_sec": 0, 00:30:46.332 "rw_mbytes_per_sec": 0, 00:30:46.332 "r_mbytes_per_sec": 0, 00:30:46.332 "w_mbytes_per_sec": 0 00:30:46.332 }, 00:30:46.332 "claimed": true, 00:30:46.332 "claim_type": "exclusive_write", 00:30:46.332 "zoned": false, 00:30:46.332 "supported_io_types": { 00:30:46.332 "read": true, 00:30:46.332 "write": true, 00:30:46.332 "unmap": true, 00:30:46.332 "flush": true, 00:30:46.332 "reset": true, 00:30:46.332 "nvme_admin": false, 00:30:46.332 "nvme_io": false, 00:30:46.332 "nvme_io_md": false, 00:30:46.332 "write_zeroes": true, 00:30:46.332 "zcopy": true, 00:30:46.332 "get_zone_info": false, 00:30:46.332 "zone_management": false, 00:30:46.332 "zone_append": false, 00:30:46.332 "compare": false, 00:30:46.332 "compare_and_write": false, 00:30:46.332 "abort": true, 00:30:46.332 "seek_hole": false, 00:30:46.332 "seek_data": false, 00:30:46.332 "copy": true, 00:30:46.332 "nvme_iov_md": false 00:30:46.332 }, 00:30:46.332 "memory_domains": [ 00:30:46.332 { 00:30:46.332 "dma_device_id": "system", 00:30:46.332 "dma_device_type": 1 00:30:46.332 }, 00:30:46.332 { 00:30:46.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:46.332 "dma_device_type": 2 00:30:46.332 } 00:30:46.332 ], 00:30:46.332 "driver_specific": {} 00:30:46.332 }' 00:30:46.332 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:46.332 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:46.332 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:46.332 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:46.590 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:46.590 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:46.590 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:46.591 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:46.591 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:46.591 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:46.591 23:16:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:46.849 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:46.849 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:47.108 [2024-07-13 23:16:36.275247] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:47.108 23:16:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.108 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.368 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:47.368 "name": "Existed_Raid", 00:30:47.368 "uuid": "32504fd8-fae4-4205-a7fc-bc33c6eeec80", 00:30:47.368 "strip_size_kb": 64, 00:30:47.368 "state": "online", 00:30:47.368 "raid_level": "raid5f", 00:30:47.368 "superblock": true, 00:30:47.368 "num_base_bdevs": 3, 00:30:47.368 "num_base_bdevs_discovered": 2, 00:30:47.368 "num_base_bdevs_operational": 2, 00:30:47.368 "base_bdevs_list": [ 00:30:47.368 { 00:30:47.368 "name": null, 00:30:47.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.368 "is_configured": false, 00:30:47.368 "data_offset": 2048, 00:30:47.368 "data_size": 63488 00:30:47.368 }, 00:30:47.368 { 00:30:47.368 "name": "BaseBdev2", 00:30:47.368 "uuid": "2ea51f32-20f9-49c5-b206-81f1a3bd1df9", 00:30:47.368 "is_configured": true, 00:30:47.368 "data_offset": 2048, 00:30:47.368 "data_size": 63488 00:30:47.368 }, 00:30:47.368 { 00:30:47.368 "name": "BaseBdev3", 00:30:47.368 "uuid": "b701ea3b-4151-483a-ac55-51cd15249f09", 00:30:47.368 "is_configured": true, 00:30:47.368 "data_offset": 2048, 00:30:47.368 "data_size": 63488 00:30:47.368 } 00:30:47.368 ] 00:30:47.368 }' 00:30:47.368 23:16:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:47.368 23:16:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.935 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:47.935 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:47.935 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
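Deleting a base bdev out from under the online array is the interesting case here: has_redundancy succeeds for raid5f, so the expected state stays "online" with two of three members, which the dump above confirms. The destructive step and the check reduce to:

    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "Existed_Raid") | "\(.state), \(.num_base_bdevs_discovered) discovered"'
    # expected per the trace: "online, 2 discovered"
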
00:30:47.935 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:48.193 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:48.193 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:48.193 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:48.452 [2024-07-13 23:16:37.718172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:48.452 [2024-07-13 23:16:37.718359] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:48.452 [2024-07-13 23:16:37.728528] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:48.452 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:48.452 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:48.452 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.452 23:16:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:48.710 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:48.710 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:48.710 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:48.968 [2024-07-13 23:16:38.228768] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:48.968 [2024-07-13 23:16:38.228898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:30:48.968 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:48.968 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:48.968 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:48.968 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.226 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:49.226 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:49.226 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:30:49.226 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:49.226 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:49.226 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:49.484 BaseBdev2 00:30:49.484 23:16:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:49.484 23:16:38 
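A second deletion drops the array below raid5f's single-failure tolerance: the trace shows the state change from online to offline on removing BaseBdev2, and removing BaseBdev3 tears the raid bdev down entirely, so the follow-up query (the same jq filter used at bdev_raid.sh@293) returns nothing. As a sketch:

    $RPC bdev_malloc_delete BaseBdev2   # second loss: online -> offline
    $RPC bdev_malloc_delete BaseBdev3   # raid bdev is cleaned up
    $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'   # prints nothing now
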
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:30:49.484 23:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:49.484 23:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:30:49.484 23:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:49.484 23:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:49.484 23:16:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:49.742 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:50.001 [ 00:30:50.001 { 00:30:50.001 "name": "BaseBdev2", 00:30:50.001 "aliases": [ 00:30:50.001 "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9" 00:30:50.001 ], 00:30:50.001 "product_name": "Malloc disk", 00:30:50.001 "block_size": 512, 00:30:50.001 "num_blocks": 65536, 00:30:50.001 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:50.001 "assigned_rate_limits": { 00:30:50.001 "rw_ios_per_sec": 0, 00:30:50.001 "rw_mbytes_per_sec": 0, 00:30:50.001 "r_mbytes_per_sec": 0, 00:30:50.001 "w_mbytes_per_sec": 0 00:30:50.001 }, 00:30:50.001 "claimed": false, 00:30:50.001 "zoned": false, 00:30:50.001 "supported_io_types": { 00:30:50.001 "read": true, 00:30:50.001 "write": true, 00:30:50.001 "unmap": true, 00:30:50.001 "flush": true, 00:30:50.001 "reset": true, 00:30:50.001 "nvme_admin": false, 00:30:50.001 "nvme_io": false, 00:30:50.001 "nvme_io_md": false, 00:30:50.001 "write_zeroes": true, 00:30:50.001 "zcopy": true, 00:30:50.001 "get_zone_info": false, 00:30:50.001 "zone_management": false, 00:30:50.001 "zone_append": false, 00:30:50.001 "compare": false, 00:30:50.001 "compare_and_write": false, 00:30:50.001 "abort": true, 00:30:50.001 "seek_hole": false, 00:30:50.001 "seek_data": false, 00:30:50.001 "copy": true, 00:30:50.001 "nvme_iov_md": false 00:30:50.001 }, 00:30:50.001 "memory_domains": [ 00:30:50.001 { 00:30:50.001 "dma_device_id": "system", 00:30:50.001 "dma_device_type": 1 00:30:50.001 }, 00:30:50.001 { 00:30:50.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.001 "dma_device_type": 2 00:30:50.001 } 00:30:50.001 ], 00:30:50.001 "driver_specific": {} 00:30:50.001 } 00:30:50.001 ] 00:30:50.001 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:30:50.001 23:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:50.001 23:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:50.001 23:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:50.259 BaseBdev3 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:50.259 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:50.518 23:16:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:50.776 [ 00:30:50.776 { 00:30:50.776 "name": "BaseBdev3", 00:30:50.776 "aliases": [ 00:30:50.776 "c4194139-d206-4978-87fb-1b36f5fa8c3b" 00:30:50.776 ], 00:30:50.776 "product_name": "Malloc disk", 00:30:50.776 "block_size": 512, 00:30:50.776 "num_blocks": 65536, 00:30:50.776 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:50.776 "assigned_rate_limits": { 00:30:50.776 "rw_ios_per_sec": 0, 00:30:50.776 "rw_mbytes_per_sec": 0, 00:30:50.776 "r_mbytes_per_sec": 0, 00:30:50.776 "w_mbytes_per_sec": 0 00:30:50.776 }, 00:30:50.776 "claimed": false, 00:30:50.776 "zoned": false, 00:30:50.776 "supported_io_types": { 00:30:50.776 "read": true, 00:30:50.776 "write": true, 00:30:50.776 "unmap": true, 00:30:50.777 "flush": true, 00:30:50.777 "reset": true, 00:30:50.777 "nvme_admin": false, 00:30:50.777 "nvme_io": false, 00:30:50.777 "nvme_io_md": false, 00:30:50.777 "write_zeroes": true, 00:30:50.777 "zcopy": true, 00:30:50.777 "get_zone_info": false, 00:30:50.777 "zone_management": false, 00:30:50.777 "zone_append": false, 00:30:50.777 "compare": false, 00:30:50.777 "compare_and_write": false, 00:30:50.777 "abort": true, 00:30:50.777 "seek_hole": false, 00:30:50.777 "seek_data": false, 00:30:50.777 "copy": true, 00:30:50.777 "nvme_iov_md": false 00:30:50.777 }, 00:30:50.777 "memory_domains": [ 00:30:50.777 { 00:30:50.777 "dma_device_id": "system", 00:30:50.777 "dma_device_type": 1 00:30:50.777 }, 00:30:50.777 { 00:30:50.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.777 "dma_device_type": 2 00:30:50.777 } 00:30:50.777 ], 00:30:50.777 "driver_specific": {} 00:30:50.777 } 00:30:50.777 ] 00:30:50.777 23:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:30:50.777 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:50.777 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:50.777 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:51.035 [2024-07-13 23:16:40.282154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:51.035 [2024-07-13 23:16:40.282270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:51.035 [2024-07-13 23:16:40.282324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:51.035 [2024-07-13 23:16:40.284603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:51.035 
23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.035 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.294 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.294 "name": "Existed_Raid", 00:30:51.294 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:51.294 "strip_size_kb": 64, 00:30:51.294 "state": "configuring", 00:30:51.294 "raid_level": "raid5f", 00:30:51.294 "superblock": true, 00:30:51.294 "num_base_bdevs": 3, 00:30:51.294 "num_base_bdevs_discovered": 2, 00:30:51.294 "num_base_bdevs_operational": 3, 00:30:51.294 "base_bdevs_list": [ 00:30:51.294 { 00:30:51.294 "name": "BaseBdev1", 00:30:51.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.294 "is_configured": false, 00:30:51.294 "data_offset": 0, 00:30:51.294 "data_size": 0 00:30:51.294 }, 00:30:51.294 { 00:30:51.294 "name": "BaseBdev2", 00:30:51.294 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:51.294 "is_configured": true, 00:30:51.294 "data_offset": 2048, 00:30:51.294 "data_size": 63488 00:30:51.294 }, 00:30:51.294 { 00:30:51.294 "name": "BaseBdev3", 00:30:51.294 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:51.294 "is_configured": true, 00:30:51.294 "data_offset": 2048, 00:30:51.294 "data_size": 63488 00:30:51.294 } 00:30:51.294 ] 00:30:51.294 }' 00:30:51.294 23:16:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.294 23:16:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.862 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:52.120 [2024-07-13 23:16:41.478833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:52.120 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:52.120 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:52.121 23:16:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.121 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:52.379 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:52.379 "name": "Existed_Raid", 00:30:52.379 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:52.379 "strip_size_kb": 64, 00:30:52.379 "state": "configuring", 00:30:52.379 "raid_level": "raid5f", 00:30:52.379 "superblock": true, 00:30:52.379 "num_base_bdevs": 3, 00:30:52.379 "num_base_bdevs_discovered": 1, 00:30:52.379 "num_base_bdevs_operational": 3, 00:30:52.379 "base_bdevs_list": [ 00:30:52.379 { 00:30:52.379 "name": "BaseBdev1", 00:30:52.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.379 "is_configured": false, 00:30:52.379 "data_offset": 0, 00:30:52.379 "data_size": 0 00:30:52.379 }, 00:30:52.379 { 00:30:52.379 "name": null, 00:30:52.379 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:52.379 "is_configured": false, 00:30:52.379 "data_offset": 2048, 00:30:52.379 "data_size": 63488 00:30:52.379 }, 00:30:52.379 { 00:30:52.379 "name": "BaseBdev3", 00:30:52.379 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:52.379 "is_configured": true, 00:30:52.379 "data_offset": 2048, 00:30:52.379 "data_size": 63488 00:30:52.379 } 00:30:52.379 ] 00:30:52.379 }' 00:30:52.379 23:16:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:52.379 23:16:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.315 23:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.315 23:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:53.315 23:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:53.315 23:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:53.574 [2024-07-13 23:16:42.940078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:53.574 BaseBdev1 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
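While the array is configuring, bdev_raid_remove_base_bdev clears the member's slot rather than shrinking the list: the dump above leaves BaseBdev2's entry with a null name and is_configured false. The slot-level assertion used here is a plain index into base_bdevs_list:

    $RPC bdev_raid_remove_base_bdev BaseBdev2
    # slot 1 (formerly BaseBdev2) should now be unconfigured:
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # false
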
bdev_name=BaseBdev1 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:30:53.574 23:16:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:53.833 23:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:54.091 [ 00:30:54.091 { 00:30:54.091 "name": "BaseBdev1", 00:30:54.091 "aliases": [ 00:30:54.091 "514cac9b-8e0c-4b61-ac61-13629167d2ed" 00:30:54.091 ], 00:30:54.091 "product_name": "Malloc disk", 00:30:54.091 "block_size": 512, 00:30:54.091 "num_blocks": 65536, 00:30:54.091 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:30:54.091 "assigned_rate_limits": { 00:30:54.091 "rw_ios_per_sec": 0, 00:30:54.091 "rw_mbytes_per_sec": 0, 00:30:54.091 "r_mbytes_per_sec": 0, 00:30:54.091 "w_mbytes_per_sec": 0 00:30:54.091 }, 00:30:54.091 "claimed": true, 00:30:54.091 "claim_type": "exclusive_write", 00:30:54.091 "zoned": false, 00:30:54.091 "supported_io_types": { 00:30:54.091 "read": true, 00:30:54.091 "write": true, 00:30:54.091 "unmap": true, 00:30:54.091 "flush": true, 00:30:54.091 "reset": true, 00:30:54.091 "nvme_admin": false, 00:30:54.091 "nvme_io": false, 00:30:54.091 "nvme_io_md": false, 00:30:54.091 "write_zeroes": true, 00:30:54.091 "zcopy": true, 00:30:54.091 "get_zone_info": false, 00:30:54.091 "zone_management": false, 00:30:54.091 "zone_append": false, 00:30:54.091 "compare": false, 00:30:54.091 "compare_and_write": false, 00:30:54.091 "abort": true, 00:30:54.091 "seek_hole": false, 00:30:54.091 "seek_data": false, 00:30:54.091 "copy": true, 00:30:54.091 "nvme_iov_md": false 00:30:54.091 }, 00:30:54.091 "memory_domains": [ 00:30:54.091 { 00:30:54.091 "dma_device_id": "system", 00:30:54.091 "dma_device_type": 1 00:30:54.091 }, 00:30:54.091 { 00:30:54.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.091 "dma_device_type": 2 00:30:54.091 } 00:30:54.091 ], 00:30:54.091 "driver_specific": {} 00:30:54.091 } 00:30:54.091 ] 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:54.091 23:16:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.091 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.348 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:54.348 "name": "Existed_Raid", 00:30:54.348 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:54.348 "strip_size_kb": 64, 00:30:54.348 "state": "configuring", 00:30:54.348 "raid_level": "raid5f", 00:30:54.348 "superblock": true, 00:30:54.348 "num_base_bdevs": 3, 00:30:54.348 "num_base_bdevs_discovered": 2, 00:30:54.348 "num_base_bdevs_operational": 3, 00:30:54.348 "base_bdevs_list": [ 00:30:54.348 { 00:30:54.348 "name": "BaseBdev1", 00:30:54.348 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:30:54.348 "is_configured": true, 00:30:54.348 "data_offset": 2048, 00:30:54.348 "data_size": 63488 00:30:54.348 }, 00:30:54.348 { 00:30:54.348 "name": null, 00:30:54.348 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:54.348 "is_configured": false, 00:30:54.348 "data_offset": 2048, 00:30:54.348 "data_size": 63488 00:30:54.348 }, 00:30:54.348 { 00:30:54.348 "name": "BaseBdev3", 00:30:54.348 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:54.348 "is_configured": true, 00:30:54.348 "data_offset": 2048, 00:30:54.348 "data_size": 63488 00:30:54.348 } 00:30:54.348 ] 00:30:54.348 }' 00:30:54.348 23:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:54.348 23:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.915 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:54.915 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.173 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:55.173 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:55.431 [2024-07-13 23:16:44.700628] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.431 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.689 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.689 "name": "Existed_Raid", 00:30:55.689 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:55.689 "strip_size_kb": 64, 00:30:55.689 "state": "configuring", 00:30:55.689 "raid_level": "raid5f", 00:30:55.689 "superblock": true, 00:30:55.689 "num_base_bdevs": 3, 00:30:55.689 "num_base_bdevs_discovered": 1, 00:30:55.689 "num_base_bdevs_operational": 3, 00:30:55.689 "base_bdevs_list": [ 00:30:55.689 { 00:30:55.689 "name": "BaseBdev1", 00:30:55.689 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:30:55.689 "is_configured": true, 00:30:55.689 "data_offset": 2048, 00:30:55.689 "data_size": 63488 00:30:55.689 }, 00:30:55.689 { 00:30:55.689 "name": null, 00:30:55.689 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:55.689 "is_configured": false, 00:30:55.689 "data_offset": 2048, 00:30:55.689 "data_size": 63488 00:30:55.689 }, 00:30:55.689 { 00:30:55.689 "name": null, 00:30:55.689 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:55.689 "is_configured": false, 00:30:55.689 "data_offset": 2048, 00:30:55.689 "data_size": 63488 00:30:55.689 } 00:30:55.689 ] 00:30:55.689 }' 00:30:55.689 23:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.689 23:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.255 23:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.255 23:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:56.514 23:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:56.514 23:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:56.773 [2024-07-13 23:16:46.044969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:56.773 23:16:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.773 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.032 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.032 "name": "Existed_Raid", 00:30:57.032 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:57.032 "strip_size_kb": 64, 00:30:57.032 "state": "configuring", 00:30:57.032 "raid_level": "raid5f", 00:30:57.032 "superblock": true, 00:30:57.032 "num_base_bdevs": 3, 00:30:57.032 "num_base_bdevs_discovered": 2, 00:30:57.032 "num_base_bdevs_operational": 3, 00:30:57.032 "base_bdevs_list": [ 00:30:57.032 { 00:30:57.032 "name": "BaseBdev1", 00:30:57.032 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:30:57.032 "is_configured": true, 00:30:57.032 "data_offset": 2048, 00:30:57.032 "data_size": 63488 00:30:57.032 }, 00:30:57.032 { 00:30:57.032 "name": null, 00:30:57.032 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:57.032 "is_configured": false, 00:30:57.032 "data_offset": 2048, 00:30:57.032 "data_size": 63488 00:30:57.032 }, 00:30:57.032 { 00:30:57.032 "name": "BaseBdev3", 00:30:57.032 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:57.032 "is_configured": true, 00:30:57.032 "data_offset": 2048, 00:30:57.032 "data_size": 63488 00:30:57.032 } 00:30:57.032 ] 00:30:57.032 }' 00:30:57.032 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.032 23:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.600 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:57.600 23:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.858 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:57.858 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:58.122 [2024-07-13 23:16:47.425510] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.122 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.123 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.123 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.394 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.394 "name": "Existed_Raid", 00:30:58.394 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:58.394 "strip_size_kb": 64, 00:30:58.394 "state": "configuring", 00:30:58.394 "raid_level": "raid5f", 00:30:58.394 "superblock": true, 00:30:58.394 "num_base_bdevs": 3, 00:30:58.394 "num_base_bdevs_discovered": 1, 00:30:58.394 "num_base_bdevs_operational": 3, 00:30:58.394 "base_bdevs_list": [ 00:30:58.394 { 00:30:58.394 "name": null, 00:30:58.394 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:30:58.394 "is_configured": false, 00:30:58.394 "data_offset": 2048, 00:30:58.394 "data_size": 63488 00:30:58.394 }, 00:30:58.394 { 00:30:58.394 "name": null, 00:30:58.394 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:58.394 "is_configured": false, 00:30:58.394 "data_offset": 2048, 00:30:58.394 "data_size": 63488 00:30:58.394 }, 00:30:58.394 { 00:30:58.394 "name": "BaseBdev3", 00:30:58.394 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:58.394 "is_configured": true, 00:30:58.394 "data_offset": 2048, 00:30:58.394 "data_size": 63488 00:30:58.394 } 00:30:58.394 ] 00:30:58.394 }' 00:30:58.394 23:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.394 23:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.960 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.960 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:59.218 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:59.218 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:59.477 [2024-07-13 23:16:48.810707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:59.477 23:16:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.477 23:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:59.736 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.736 "name": "Existed_Raid", 00:30:59.736 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:30:59.736 "strip_size_kb": 64, 00:30:59.736 "state": "configuring", 00:30:59.736 "raid_level": "raid5f", 00:30:59.736 "superblock": true, 00:30:59.736 "num_base_bdevs": 3, 00:30:59.736 "num_base_bdevs_discovered": 2, 00:30:59.736 "num_base_bdevs_operational": 3, 00:30:59.736 "base_bdevs_list": [ 00:30:59.736 { 00:30:59.736 "name": null, 00:30:59.736 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:30:59.736 "is_configured": false, 00:30:59.736 "data_offset": 2048, 00:30:59.736 "data_size": 63488 00:30:59.736 }, 00:30:59.736 { 00:30:59.736 "name": "BaseBdev2", 00:30:59.736 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:30:59.736 "is_configured": true, 00:30:59.736 "data_offset": 2048, 00:30:59.736 "data_size": 63488 00:30:59.736 }, 00:30:59.736 { 00:30:59.736 "name": "BaseBdev3", 00:30:59.736 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:30:59.736 "is_configured": true, 00:30:59.736 "data_offset": 2048, 00:30:59.736 "data_size": 63488 00:30:59.736 } 00:30:59.736 ] 00:30:59.736 }' 00:30:59.736 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.736 23:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:00.305 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.305 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:00.872 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:31:00.872 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.872 23:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 
00:31:00.872 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 514cac9b-8e0c-4b61-ac61-13629167d2ed 00:31:01.131 [2024-07-13 23:16:50.403734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:01.131 [2024-07-13 23:16:50.403959] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:31:01.131 [2024-07-13 23:16:50.403973] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:01.131 [2024-07-13 23:16:50.404045] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:31:01.131 NewBaseBdev 00:31:01.131 [2024-07-13 23:16:50.404807] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:31:01.131 [2024-07-13 23:16:50.404823] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:31:01.131 [2024-07-13 23:16:50.404922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:01.131 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:01.389 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:01.648 [ 00:31:01.648 { 00:31:01.648 "name": "NewBaseBdev", 00:31:01.648 "aliases": [ 00:31:01.648 "514cac9b-8e0c-4b61-ac61-13629167d2ed" 00:31:01.648 ], 00:31:01.648 "product_name": "Malloc disk", 00:31:01.648 "block_size": 512, 00:31:01.648 "num_blocks": 65536, 00:31:01.648 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:31:01.648 "assigned_rate_limits": { 00:31:01.648 "rw_ios_per_sec": 0, 00:31:01.648 "rw_mbytes_per_sec": 0, 00:31:01.648 "r_mbytes_per_sec": 0, 00:31:01.648 "w_mbytes_per_sec": 0 00:31:01.648 }, 00:31:01.648 "claimed": true, 00:31:01.648 "claim_type": "exclusive_write", 00:31:01.648 "zoned": false, 00:31:01.648 "supported_io_types": { 00:31:01.648 "read": true, 00:31:01.648 "write": true, 00:31:01.648 "unmap": true, 00:31:01.648 "flush": true, 00:31:01.648 "reset": true, 00:31:01.648 "nvme_admin": false, 00:31:01.648 "nvme_io": false, 00:31:01.648 "nvme_io_md": false, 00:31:01.648 "write_zeroes": true, 00:31:01.648 "zcopy": true, 00:31:01.648 "get_zone_info": false, 00:31:01.648 "zone_management": false, 00:31:01.648 "zone_append": false, 00:31:01.648 "compare": false, 00:31:01.648 "compare_and_write": false, 00:31:01.648 "abort": true, 00:31:01.648 "seek_hole": false, 00:31:01.648 "seek_data": false, 00:31:01.648 "copy": true, 
00:31:01.648 "nvme_iov_md": false 00:31:01.648 }, 00:31:01.648 "memory_domains": [ 00:31:01.648 { 00:31:01.648 "dma_device_id": "system", 00:31:01.648 "dma_device_type": 1 00:31:01.648 }, 00:31:01.648 { 00:31:01.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.648 "dma_device_type": 2 00:31:01.648 } 00:31:01.648 ], 00:31:01.648 "driver_specific": {} 00:31:01.648 } 00:31:01.648 ] 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:01.648 23:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.907 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:01.907 "name": "Existed_Raid", 00:31:01.907 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:31:01.907 "strip_size_kb": 64, 00:31:01.907 "state": "online", 00:31:01.907 "raid_level": "raid5f", 00:31:01.907 "superblock": true, 00:31:01.907 "num_base_bdevs": 3, 00:31:01.907 "num_base_bdevs_discovered": 3, 00:31:01.907 "num_base_bdevs_operational": 3, 00:31:01.907 "base_bdevs_list": [ 00:31:01.907 { 00:31:01.907 "name": "NewBaseBdev", 00:31:01.907 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:31:01.907 "is_configured": true, 00:31:01.907 "data_offset": 2048, 00:31:01.907 "data_size": 63488 00:31:01.907 }, 00:31:01.907 { 00:31:01.907 "name": "BaseBdev2", 00:31:01.907 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:31:01.907 "is_configured": true, 00:31:01.907 "data_offset": 2048, 00:31:01.907 "data_size": 63488 00:31:01.907 }, 00:31:01.907 { 00:31:01.907 "name": "BaseBdev3", 00:31:01.907 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:31:01.907 "is_configured": true, 00:31:01.907 "data_offset": 2048, 00:31:01.907 "data_size": 63488 00:31:01.907 } 00:31:01.907 ] 00:31:01.907 }' 00:31:01.907 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:01.907 23:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # 
verify_raid_bdev_properties Existed_Raid 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:02.474 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:02.731 [2024-07-13 23:16:51.940686] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:02.731 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:02.731 "name": "Existed_Raid", 00:31:02.731 "aliases": [ 00:31:02.731 "cbabbffc-7d89-4056-910c-b61d504ec0ff" 00:31:02.731 ], 00:31:02.731 "product_name": "Raid Volume", 00:31:02.731 "block_size": 512, 00:31:02.731 "num_blocks": 126976, 00:31:02.731 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:31:02.732 "assigned_rate_limits": { 00:31:02.732 "rw_ios_per_sec": 0, 00:31:02.732 "rw_mbytes_per_sec": 0, 00:31:02.732 "r_mbytes_per_sec": 0, 00:31:02.732 "w_mbytes_per_sec": 0 00:31:02.732 }, 00:31:02.732 "claimed": false, 00:31:02.732 "zoned": false, 00:31:02.732 "supported_io_types": { 00:31:02.732 "read": true, 00:31:02.732 "write": true, 00:31:02.732 "unmap": false, 00:31:02.732 "flush": false, 00:31:02.732 "reset": true, 00:31:02.732 "nvme_admin": false, 00:31:02.732 "nvme_io": false, 00:31:02.732 "nvme_io_md": false, 00:31:02.732 "write_zeroes": true, 00:31:02.732 "zcopy": false, 00:31:02.732 "get_zone_info": false, 00:31:02.732 "zone_management": false, 00:31:02.732 "zone_append": false, 00:31:02.732 "compare": false, 00:31:02.732 "compare_and_write": false, 00:31:02.732 "abort": false, 00:31:02.732 "seek_hole": false, 00:31:02.732 "seek_data": false, 00:31:02.732 "copy": false, 00:31:02.732 "nvme_iov_md": false 00:31:02.732 }, 00:31:02.732 "driver_specific": { 00:31:02.732 "raid": { 00:31:02.732 "uuid": "cbabbffc-7d89-4056-910c-b61d504ec0ff", 00:31:02.732 "strip_size_kb": 64, 00:31:02.732 "state": "online", 00:31:02.732 "raid_level": "raid5f", 00:31:02.732 "superblock": true, 00:31:02.732 "num_base_bdevs": 3, 00:31:02.732 "num_base_bdevs_discovered": 3, 00:31:02.732 "num_base_bdevs_operational": 3, 00:31:02.732 "base_bdevs_list": [ 00:31:02.732 { 00:31:02.732 "name": "NewBaseBdev", 00:31:02.732 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:31:02.732 "is_configured": true, 00:31:02.732 "data_offset": 2048, 00:31:02.732 "data_size": 63488 00:31:02.732 }, 00:31:02.732 { 00:31:02.732 "name": "BaseBdev2", 00:31:02.732 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:31:02.732 "is_configured": true, 00:31:02.732 "data_offset": 2048, 00:31:02.732 "data_size": 63488 00:31:02.732 }, 00:31:02.732 { 00:31:02.732 "name": "BaseBdev3", 00:31:02.732 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:31:02.732 "is_configured": true, 00:31:02.732 "data_offset": 2048, 00:31:02.732 "data_size": 63488 00:31:02.732 } 00:31:02.732 ] 00:31:02.732 } 00:31:02.732 } 00:31:02.732 }' 
00:31:02.732 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:02.732 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:31:02.732 BaseBdev2 00:31:02.732 BaseBdev3' 00:31:02.732 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:02.732 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:02.732 23:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:31:02.989 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:02.989 "name": "NewBaseBdev", 00:31:02.989 "aliases": [ 00:31:02.989 "514cac9b-8e0c-4b61-ac61-13629167d2ed" 00:31:02.989 ], 00:31:02.989 "product_name": "Malloc disk", 00:31:02.989 "block_size": 512, 00:31:02.989 "num_blocks": 65536, 00:31:02.989 "uuid": "514cac9b-8e0c-4b61-ac61-13629167d2ed", 00:31:02.989 "assigned_rate_limits": { 00:31:02.989 "rw_ios_per_sec": 0, 00:31:02.989 "rw_mbytes_per_sec": 0, 00:31:02.989 "r_mbytes_per_sec": 0, 00:31:02.989 "w_mbytes_per_sec": 0 00:31:02.989 }, 00:31:02.989 "claimed": true, 00:31:02.989 "claim_type": "exclusive_write", 00:31:02.989 "zoned": false, 00:31:02.989 "supported_io_types": { 00:31:02.989 "read": true, 00:31:02.989 "write": true, 00:31:02.989 "unmap": true, 00:31:02.989 "flush": true, 00:31:02.989 "reset": true, 00:31:02.989 "nvme_admin": false, 00:31:02.989 "nvme_io": false, 00:31:02.989 "nvme_io_md": false, 00:31:02.989 "write_zeroes": true, 00:31:02.989 "zcopy": true, 00:31:02.989 "get_zone_info": false, 00:31:02.989 "zone_management": false, 00:31:02.989 "zone_append": false, 00:31:02.989 "compare": false, 00:31:02.989 "compare_and_write": false, 00:31:02.989 "abort": true, 00:31:02.989 "seek_hole": false, 00:31:02.989 "seek_data": false, 00:31:02.989 "copy": true, 00:31:02.989 "nvme_iov_md": false 00:31:02.989 }, 00:31:02.989 "memory_domains": [ 00:31:02.989 { 00:31:02.989 "dma_device_id": "system", 00:31:02.989 "dma_device_type": 1 00:31:02.989 }, 00:31:02.989 { 00:31:02.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:02.989 "dma_device_type": 2 00:31:02.989 } 00:31:02.989 ], 00:31:02.989 "driver_specific": {} 00:31:02.989 }' 00:31:02.989 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:02.989 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:02.989 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:02.989 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # 
jq .dif_type 00:31:03.246 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:03.540 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:03.540 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:03.540 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:03.540 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:03.540 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:03.540 "name": "BaseBdev2", 00:31:03.540 "aliases": [ 00:31:03.540 "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9" 00:31:03.540 ], 00:31:03.540 "product_name": "Malloc disk", 00:31:03.540 "block_size": 512, 00:31:03.540 "num_blocks": 65536, 00:31:03.540 "uuid": "2bad12e2-dbad-4ee6-b6ce-a3352a5899d9", 00:31:03.540 "assigned_rate_limits": { 00:31:03.540 "rw_ios_per_sec": 0, 00:31:03.540 "rw_mbytes_per_sec": 0, 00:31:03.540 "r_mbytes_per_sec": 0, 00:31:03.540 "w_mbytes_per_sec": 0 00:31:03.540 }, 00:31:03.540 "claimed": true, 00:31:03.540 "claim_type": "exclusive_write", 00:31:03.540 "zoned": false, 00:31:03.540 "supported_io_types": { 00:31:03.540 "read": true, 00:31:03.540 "write": true, 00:31:03.540 "unmap": true, 00:31:03.540 "flush": true, 00:31:03.540 "reset": true, 00:31:03.540 "nvme_admin": false, 00:31:03.540 "nvme_io": false, 00:31:03.540 "nvme_io_md": false, 00:31:03.540 "write_zeroes": true, 00:31:03.540 "zcopy": true, 00:31:03.540 "get_zone_info": false, 00:31:03.540 "zone_management": false, 00:31:03.540 "zone_append": false, 00:31:03.540 "compare": false, 00:31:03.540 "compare_and_write": false, 00:31:03.540 "abort": true, 00:31:03.540 "seek_hole": false, 00:31:03.540 "seek_data": false, 00:31:03.540 "copy": true, 00:31:03.540 "nvme_iov_md": false 00:31:03.540 }, 00:31:03.540 "memory_domains": [ 00:31:03.540 { 00:31:03.540 "dma_device_id": "system", 00:31:03.540 "dma_device_type": 1 00:31:03.540 }, 00:31:03.540 { 00:31:03.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:03.540 "dma_device_type": 2 00:31:03.540 } 00:31:03.540 ], 00:31:03.540 "driver_specific": {} 00:31:03.540 }' 00:31:03.540 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:03.797 23:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:03.797 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:03.797 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:03.797 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:03.797 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:03.797 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:03.797 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:04.054 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:04.311 "name": "BaseBdev3", 00:31:04.311 "aliases": [ 00:31:04.311 "c4194139-d206-4978-87fb-1b36f5fa8c3b" 00:31:04.311 ], 00:31:04.311 "product_name": "Malloc disk", 00:31:04.311 "block_size": 512, 00:31:04.311 "num_blocks": 65536, 00:31:04.311 "uuid": "c4194139-d206-4978-87fb-1b36f5fa8c3b", 00:31:04.311 "assigned_rate_limits": { 00:31:04.311 "rw_ios_per_sec": 0, 00:31:04.311 "rw_mbytes_per_sec": 0, 00:31:04.311 "r_mbytes_per_sec": 0, 00:31:04.311 "w_mbytes_per_sec": 0 00:31:04.311 }, 00:31:04.311 "claimed": true, 00:31:04.311 "claim_type": "exclusive_write", 00:31:04.311 "zoned": false, 00:31:04.311 "supported_io_types": { 00:31:04.311 "read": true, 00:31:04.311 "write": true, 00:31:04.311 "unmap": true, 00:31:04.311 "flush": true, 00:31:04.311 "reset": true, 00:31:04.311 "nvme_admin": false, 00:31:04.311 "nvme_io": false, 00:31:04.311 "nvme_io_md": false, 00:31:04.311 "write_zeroes": true, 00:31:04.311 "zcopy": true, 00:31:04.311 "get_zone_info": false, 00:31:04.311 "zone_management": false, 00:31:04.311 "zone_append": false, 00:31:04.311 "compare": false, 00:31:04.311 "compare_and_write": false, 00:31:04.311 "abort": true, 00:31:04.311 "seek_hole": false, 00:31:04.311 "seek_data": false, 00:31:04.311 "copy": true, 00:31:04.311 "nvme_iov_md": false 00:31:04.311 }, 00:31:04.311 "memory_domains": [ 00:31:04.311 { 00:31:04.311 "dma_device_id": "system", 00:31:04.311 "dma_device_type": 1 00:31:04.311 }, 00:31:04.311 { 00:31:04.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.311 "dma_device_type": 2 00:31:04.311 } 00:31:04.311 ], 00:31:04.311 "driver_specific": {} 00:31:04.311 }' 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:04.311 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.568 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.568 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.568 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.568 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.568 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:31:04.568 23:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:04.826 [2024-07-13 23:16:54.160997] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:04.826 [2024-07-13 23:16:54.161056] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:04.826 [2024-07-13 23:16:54.161146] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:04.826 [2024-07-13 23:16:54.161464] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:04.827 [2024-07-13 23:16:54.161486] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 160592 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 160592 ']' 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 160592 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160592 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:04.827 killing process with pid 160592 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160592' 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 160592 00:31:04.827 [2024-07-13 23:16:54.204095] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:04.827 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 160592 00:31:04.827 [2024-07-13 23:16:54.231027] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:05.085 23:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:31:05.085 00:31:05.085 real 0m29.775s 00:31:05.085 user 0m56.792s 00:31:05.085 sys 0m3.411s 00:31:05.085 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:05.085 ************************************ 00:31:05.085 END TEST raid5f_state_function_test_sb 00:31:05.085 ************************************ 00:31:05.085 23:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:05.343 23:16:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:05.343 23:16:54 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:31:05.343 23:16:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:05.343 23:16:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.343 23:16:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:05.343 ************************************ 00:31:05.343 START TEST 
raid5f_superblock_test 00:31:05.343 ************************************ 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 3 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:05.343 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=161561 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 161561 /var/tmp/spdk-raid.sock 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 161561 ']' 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.344 23:16:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.344 [2024-07-13 23:16:54.583009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:31:05.344 [2024-07-13 23:16:54.583898] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161561 ] 00:31:05.344 [2024-07-13 23:16:54.731468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.603 [2024-07-13 23:16:54.826238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.603 [2024-07-13 23:16:54.878887] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:06.170 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:31:06.428 malloc1 00:31:06.428 23:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:06.686 [2024-07-13 23:16:55.997386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:06.686 [2024-07-13 23:16:55.997517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.686 [2024-07-13 23:16:55.997568] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:31:06.686 [2024-07-13 23:16:55.997651] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.686 [2024-07-13 23:16:56.000408] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.686 [2024-07-13 23:16:56.000473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:06.686 pt1 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:06.686 23:16:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:06.686 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:06.944 malloc2 00:31:06.944 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:07.202 [2024-07-13 23:16:56.443537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:07.202 [2024-07-13 23:16:56.443652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:07.202 [2024-07-13 23:16:56.443690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:31:07.202 [2024-07-13 23:16:56.443738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:07.202 [2024-07-13 23:16:56.446360] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:07.202 [2024-07-13 23:16:56.446432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:07.202 pt2 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:07.202 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:31:07.461 malloc3 00:31:07.461 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:07.719 [2024-07-13 23:16:56.905133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:07.719 [2024-07-13 23:16:56.905257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:07.719 [2024-07-13 23:16:56.905324] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:07.719 [2024-07-13 23:16:56.905369] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:07.719 [2024-07-13 23:16:56.907863] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:07.719 [2024-07-13 23:16:56.907933] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:07.719 pt3 00:31:07.719 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:07.719 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:07.719 23:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:31:07.719 [2024-07-13 23:16:57.117257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:07.719 [2024-07-13 23:16:57.119663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:07.719 [2024-07-13 23:16:57.119762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:07.719 [2024-07-13 23:16:57.120064] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:31:07.719 [2024-07-13 23:16:57.120090] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:07.719 [2024-07-13 23:16:57.120222] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:31:07.719 [2024-07-13 23:16:57.121147] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:31:07.719 [2024-07-13 23:16:57.121172] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:31:07.719 [2024-07-13 23:16:57.121426] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:07.979 "name": "raid_bdev1", 00:31:07.979 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:07.979 "strip_size_kb": 64, 00:31:07.979 "state": "online", 00:31:07.979 "raid_level": "raid5f", 00:31:07.979 "superblock": true, 00:31:07.979 "num_base_bdevs": 3, 00:31:07.979 "num_base_bdevs_discovered": 3, 00:31:07.979 "num_base_bdevs_operational": 3, 00:31:07.979 
"base_bdevs_list": [ 00:31:07.979 { 00:31:07.979 "name": "pt1", 00:31:07.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:07.979 "is_configured": true, 00:31:07.979 "data_offset": 2048, 00:31:07.979 "data_size": 63488 00:31:07.979 }, 00:31:07.979 { 00:31:07.979 "name": "pt2", 00:31:07.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:07.979 "is_configured": true, 00:31:07.979 "data_offset": 2048, 00:31:07.979 "data_size": 63488 00:31:07.979 }, 00:31:07.979 { 00:31:07.979 "name": "pt3", 00:31:07.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:07.979 "is_configured": true, 00:31:07.979 "data_offset": 2048, 00:31:07.979 "data_size": 63488 00:31:07.979 } 00:31:07.979 ] 00:31:07.979 }' 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:07.979 23:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:08.944 23:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:08.944 [2024-07-13 23:16:58.233870] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:08.944 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:08.944 "name": "raid_bdev1", 00:31:08.944 "aliases": [ 00:31:08.944 "90a650b5-8db9-4bbe-9b88-3635f4bf9b21" 00:31:08.944 ], 00:31:08.944 "product_name": "Raid Volume", 00:31:08.944 "block_size": 512, 00:31:08.944 "num_blocks": 126976, 00:31:08.944 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:08.944 "assigned_rate_limits": { 00:31:08.944 "rw_ios_per_sec": 0, 00:31:08.944 "rw_mbytes_per_sec": 0, 00:31:08.944 "r_mbytes_per_sec": 0, 00:31:08.944 "w_mbytes_per_sec": 0 00:31:08.944 }, 00:31:08.944 "claimed": false, 00:31:08.944 "zoned": false, 00:31:08.944 "supported_io_types": { 00:31:08.944 "read": true, 00:31:08.944 "write": true, 00:31:08.944 "unmap": false, 00:31:08.944 "flush": false, 00:31:08.944 "reset": true, 00:31:08.944 "nvme_admin": false, 00:31:08.944 "nvme_io": false, 00:31:08.944 "nvme_io_md": false, 00:31:08.944 "write_zeroes": true, 00:31:08.944 "zcopy": false, 00:31:08.944 "get_zone_info": false, 00:31:08.944 "zone_management": false, 00:31:08.944 "zone_append": false, 00:31:08.944 "compare": false, 00:31:08.944 "compare_and_write": false, 00:31:08.944 "abort": false, 00:31:08.944 "seek_hole": false, 00:31:08.944 "seek_data": false, 00:31:08.944 "copy": false, 00:31:08.944 "nvme_iov_md": false 00:31:08.944 }, 00:31:08.944 "driver_specific": { 00:31:08.944 "raid": { 00:31:08.944 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:08.944 "strip_size_kb": 64, 00:31:08.944 "state": "online", 00:31:08.944 "raid_level": "raid5f", 
00:31:08.944 "superblock": true, 00:31:08.944 "num_base_bdevs": 3, 00:31:08.944 "num_base_bdevs_discovered": 3, 00:31:08.944 "num_base_bdevs_operational": 3, 00:31:08.944 "base_bdevs_list": [ 00:31:08.944 { 00:31:08.944 "name": "pt1", 00:31:08.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:08.944 "is_configured": true, 00:31:08.944 "data_offset": 2048, 00:31:08.944 "data_size": 63488 00:31:08.944 }, 00:31:08.944 { 00:31:08.944 "name": "pt2", 00:31:08.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:08.944 "is_configured": true, 00:31:08.944 "data_offset": 2048, 00:31:08.944 "data_size": 63488 00:31:08.944 }, 00:31:08.944 { 00:31:08.944 "name": "pt3", 00:31:08.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:08.944 "is_configured": true, 00:31:08.944 "data_offset": 2048, 00:31:08.944 "data_size": 63488 00:31:08.944 } 00:31:08.944 ] 00:31:08.944 } 00:31:08.944 } 00:31:08.944 }' 00:31:08.944 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:08.944 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:08.944 pt2 00:31:08.944 pt3' 00:31:08.944 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:08.944 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:08.944 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:09.203 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:09.203 "name": "pt1", 00:31:09.203 "aliases": [ 00:31:09.203 "00000000-0000-0000-0000-000000000001" 00:31:09.203 ], 00:31:09.203 "product_name": "passthru", 00:31:09.203 "block_size": 512, 00:31:09.203 "num_blocks": 65536, 00:31:09.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:09.203 "assigned_rate_limits": { 00:31:09.203 "rw_ios_per_sec": 0, 00:31:09.203 "rw_mbytes_per_sec": 0, 00:31:09.203 "r_mbytes_per_sec": 0, 00:31:09.203 "w_mbytes_per_sec": 0 00:31:09.203 }, 00:31:09.203 "claimed": true, 00:31:09.203 "claim_type": "exclusive_write", 00:31:09.203 "zoned": false, 00:31:09.203 "supported_io_types": { 00:31:09.203 "read": true, 00:31:09.203 "write": true, 00:31:09.203 "unmap": true, 00:31:09.203 "flush": true, 00:31:09.203 "reset": true, 00:31:09.203 "nvme_admin": false, 00:31:09.203 "nvme_io": false, 00:31:09.203 "nvme_io_md": false, 00:31:09.203 "write_zeroes": true, 00:31:09.203 "zcopy": true, 00:31:09.203 "get_zone_info": false, 00:31:09.203 "zone_management": false, 00:31:09.203 "zone_append": false, 00:31:09.203 "compare": false, 00:31:09.203 "compare_and_write": false, 00:31:09.203 "abort": true, 00:31:09.203 "seek_hole": false, 00:31:09.203 "seek_data": false, 00:31:09.203 "copy": true, 00:31:09.203 "nvme_iov_md": false 00:31:09.203 }, 00:31:09.203 "memory_domains": [ 00:31:09.203 { 00:31:09.203 "dma_device_id": "system", 00:31:09.203 "dma_device_type": 1 00:31:09.203 }, 00:31:09.203 { 00:31:09.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:09.203 "dma_device_type": 2 00:31:09.203 } 00:31:09.203 ], 00:31:09.203 "driver_specific": { 00:31:09.203 "passthru": { 00:31:09.203 "name": "pt1", 00:31:09.203 "base_bdev_name": "malloc1" 00:31:09.203 } 00:31:09.203 } 00:31:09.203 }' 00:31:09.203 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:09.461 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:09.720 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:09.720 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:09.720 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:09.720 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:09.720 23:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:09.978 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:09.978 "name": "pt2", 00:31:09.978 "aliases": [ 00:31:09.978 "00000000-0000-0000-0000-000000000002" 00:31:09.978 ], 00:31:09.978 "product_name": "passthru", 00:31:09.978 "block_size": 512, 00:31:09.978 "num_blocks": 65536, 00:31:09.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:09.979 "assigned_rate_limits": { 00:31:09.979 "rw_ios_per_sec": 0, 00:31:09.979 "rw_mbytes_per_sec": 0, 00:31:09.979 "r_mbytes_per_sec": 0, 00:31:09.979 "w_mbytes_per_sec": 0 00:31:09.979 }, 00:31:09.979 "claimed": true, 00:31:09.979 "claim_type": "exclusive_write", 00:31:09.979 "zoned": false, 00:31:09.979 "supported_io_types": { 00:31:09.979 "read": true, 00:31:09.979 "write": true, 00:31:09.979 "unmap": true, 00:31:09.979 "flush": true, 00:31:09.979 "reset": true, 00:31:09.979 "nvme_admin": false, 00:31:09.979 "nvme_io": false, 00:31:09.979 "nvme_io_md": false, 00:31:09.979 "write_zeroes": true, 00:31:09.979 "zcopy": true, 00:31:09.979 "get_zone_info": false, 00:31:09.979 "zone_management": false, 00:31:09.979 "zone_append": false, 00:31:09.979 "compare": false, 00:31:09.979 "compare_and_write": false, 00:31:09.979 "abort": true, 00:31:09.979 "seek_hole": false, 00:31:09.979 "seek_data": false, 00:31:09.979 "copy": true, 00:31:09.979 "nvme_iov_md": false 00:31:09.979 }, 00:31:09.979 "memory_domains": [ 00:31:09.979 { 00:31:09.979 "dma_device_id": "system", 00:31:09.979 "dma_device_type": 1 00:31:09.979 }, 00:31:09.979 { 00:31:09.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:09.979 "dma_device_type": 2 00:31:09.979 } 00:31:09.979 ], 00:31:09.979 "driver_specific": { 00:31:09.979 "passthru": { 00:31:09.979 "name": "pt2", 00:31:09.979 "base_bdev_name": "malloc2" 00:31:09.979 } 00:31:09.979 } 00:31:09.979 }' 00:31:09.979 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:09.979 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:09.979 23:16:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:09.979 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:09.979 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:09.979 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:10.237 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:10.495 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:10.495 "name": "pt3", 00:31:10.495 "aliases": [ 00:31:10.495 "00000000-0000-0000-0000-000000000003" 00:31:10.495 ], 00:31:10.495 "product_name": "passthru", 00:31:10.495 "block_size": 512, 00:31:10.495 "num_blocks": 65536, 00:31:10.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:10.495 "assigned_rate_limits": { 00:31:10.495 "rw_ios_per_sec": 0, 00:31:10.495 "rw_mbytes_per_sec": 0, 00:31:10.495 "r_mbytes_per_sec": 0, 00:31:10.495 "w_mbytes_per_sec": 0 00:31:10.495 }, 00:31:10.495 "claimed": true, 00:31:10.495 "claim_type": "exclusive_write", 00:31:10.495 "zoned": false, 00:31:10.495 "supported_io_types": { 00:31:10.495 "read": true, 00:31:10.495 "write": true, 00:31:10.495 "unmap": true, 00:31:10.495 "flush": true, 00:31:10.495 "reset": true, 00:31:10.495 "nvme_admin": false, 00:31:10.495 "nvme_io": false, 00:31:10.495 "nvme_io_md": false, 00:31:10.495 "write_zeroes": true, 00:31:10.495 "zcopy": true, 00:31:10.495 "get_zone_info": false, 00:31:10.495 "zone_management": false, 00:31:10.495 "zone_append": false, 00:31:10.495 "compare": false, 00:31:10.495 "compare_and_write": false, 00:31:10.495 "abort": true, 00:31:10.495 "seek_hole": false, 00:31:10.495 "seek_data": false, 00:31:10.495 "copy": true, 00:31:10.495 "nvme_iov_md": false 00:31:10.495 }, 00:31:10.495 "memory_domains": [ 00:31:10.495 { 00:31:10.495 "dma_device_id": "system", 00:31:10.495 "dma_device_type": 1 00:31:10.495 }, 00:31:10.495 { 00:31:10.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.495 "dma_device_type": 2 00:31:10.495 } 00:31:10.495 ], 00:31:10.495 "driver_specific": { 00:31:10.495 "passthru": { 00:31:10.495 "name": "pt3", 00:31:10.495 "base_bdev_name": "malloc3" 00:31:10.495 } 00:31:10.495 } 00:31:10.495 }' 00:31:10.495 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:10.495 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:10.495 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:10.495 23:16:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:10.495 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:10.753 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:10.753 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:10.753 23:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:10.753 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:10.753 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:10.753 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:10.753 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:10.753 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:10.753 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:31:11.011 [2024-07-13 23:17:00.370430] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:11.011 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=90a650b5-8db9-4bbe-9b88-3635f4bf9b21 00:31:11.011 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 90a650b5-8db9-4bbe-9b88-3635f4bf9b21 ']' 00:31:11.011 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:11.269 [2024-07-13 23:17:00.650285] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:11.269 [2024-07-13 23:17:00.650324] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:11.269 [2024-07-13 23:17:00.650447] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:11.269 [2024-07-13 23:17:00.650546] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:11.269 [2024-07-13 23:17:00.650560] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:31:11.269 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.269 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:11.527 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:11.527 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:11.527 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:11.527 23:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:11.785 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:11.785 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:12.043 23:17:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:12.043 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:12.301 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:12.301 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:12.559 23:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:12.818 [2024-07-13 23:17:02.050581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:12.818 [2024-07-13 23:17:02.052748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:12.818 [2024-07-13 23:17:02.052820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:12.818 [2024-07-13 23:17:02.052875] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:12.818 [2024-07-13 23:17:02.052995] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:12.818 [2024-07-13 23:17:02.053035] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:12.818 [2024-07-13 23:17:02.053133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:31:12.818 [2024-07-13 23:17:02.053145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:31:12.818 request: 00:31:12.818 { 00:31:12.818 "name": "raid_bdev1", 00:31:12.818 "raid_level": "raid5f", 00:31:12.818 "base_bdevs": [ 00:31:12.818 "malloc1", 00:31:12.818 "malloc2", 00:31:12.818 "malloc3" 00:31:12.818 ], 00:31:12.818 "strip_size_kb": 64, 00:31:12.818 "superblock": false, 00:31:12.818 "method": "bdev_raid_create", 00:31:12.818 "req_id": 1 00:31:12.818 } 00:31:12.818 Got JSON-RPC error response 00:31:12.818 response: 00:31:12.818 { 00:31:12.818 "code": -17, 00:31:12.818 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:12.818 } 00:31:12.818 23:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:31:12.818 23:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:12.818 23:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:12.818 23:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:12.818 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.818 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:13.076 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:13.076 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:13.076 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:13.076 [2024-07-13 23:17:02.474553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:13.076 [2024-07-13 23:17:02.474677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.076 [2024-07-13 23:17:02.474739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:13.076 [2024-07-13 23:17:02.474762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.076 [2024-07-13 23:17:02.477485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.076 [2024-07-13 23:17:02.477537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:13.076 [2024-07-13 23:17:02.477664] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:13.076 [2024-07-13 23:17:02.477755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:13.076 pt1 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.335 "name": "raid_bdev1", 00:31:13.335 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:13.335 "strip_size_kb": 64, 00:31:13.335 "state": "configuring", 00:31:13.335 "raid_level": "raid5f", 00:31:13.335 "superblock": true, 00:31:13.335 "num_base_bdevs": 3, 00:31:13.335 "num_base_bdevs_discovered": 1, 00:31:13.335 "num_base_bdevs_operational": 3, 00:31:13.335 "base_bdevs_list": [ 00:31:13.335 { 00:31:13.335 "name": "pt1", 00:31:13.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:13.335 "is_configured": true, 00:31:13.335 "data_offset": 2048, 00:31:13.335 "data_size": 63488 00:31:13.335 }, 00:31:13.335 { 00:31:13.335 "name": null, 00:31:13.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:13.335 "is_configured": false, 00:31:13.335 "data_offset": 2048, 00:31:13.335 "data_size": 63488 00:31:13.335 }, 00:31:13.335 { 00:31:13.335 "name": null, 00:31:13.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:13.335 "is_configured": false, 00:31:13.335 "data_offset": 2048, 00:31:13.335 "data_size": 63488 00:31:13.335 } 00:31:13.335 ] 00:31:13.335 }' 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.335 23:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.286 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:31:14.286 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:14.286 [2024-07-13 23:17:03.518860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:14.286 [2024-07-13 23:17:03.518993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:14.286 [2024-07-13 23:17:03.519034] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:31:14.286 [2024-07-13 23:17:03.519056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:14.286 [2024-07-13 23:17:03.519557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:14.286 [2024-07-13 23:17:03.519607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:14.286 [2024-07-13 23:17:03.519746] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:14.286 [2024-07-13 23:17:03.519787] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:14.286 pt2 00:31:14.286 23:17:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:14.544 [2024-07-13 23:17:03.738904] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.544 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.802 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:14.802 "name": "raid_bdev1", 00:31:14.802 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:14.802 "strip_size_kb": 64, 00:31:14.802 "state": "configuring", 00:31:14.802 "raid_level": "raid5f", 00:31:14.802 "superblock": true, 00:31:14.802 "num_base_bdevs": 3, 00:31:14.802 "num_base_bdevs_discovered": 1, 00:31:14.802 "num_base_bdevs_operational": 3, 00:31:14.802 "base_bdevs_list": [ 00:31:14.802 { 00:31:14.802 "name": "pt1", 00:31:14.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:14.802 "is_configured": true, 00:31:14.802 "data_offset": 2048, 00:31:14.802 "data_size": 63488 00:31:14.802 }, 00:31:14.802 { 00:31:14.802 "name": null, 00:31:14.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:14.802 "is_configured": false, 00:31:14.802 "data_offset": 2048, 00:31:14.802 "data_size": 63488 00:31:14.802 }, 00:31:14.802 { 00:31:14.802 "name": null, 00:31:14.802 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:14.802 "is_configured": false, 00:31:14.802 "data_offset": 2048, 00:31:14.802 "data_size": 63488 00:31:14.802 } 00:31:14.802 ] 00:31:14.802 }' 00:31:14.802 23:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:14.802 23:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.367 23:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:31:15.367 23:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:15.367 23:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:15.626 [2024-07-13 23:17:04.775191] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:15.626 [2024-07-13 23:17:04.775322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.626 [2024-07-13 23:17:04.775361] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:15.626 [2024-07-13 23:17:04.775390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.626 [2024-07-13 23:17:04.775928] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.626 [2024-07-13 23:17:04.775986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:15.626 [2024-07-13 23:17:04.776090] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:15.626 [2024-07-13 23:17:04.776117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:15.626 pt2 00:31:15.626 23:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:15.626 23:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:15.626 23:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:15.626 [2024-07-13 23:17:04.995149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:15.626 [2024-07-13 23:17:04.995240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.626 [2024-07-13 23:17:04.995275] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:15.626 [2024-07-13 23:17:04.995303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.626 [2024-07-13 23:17:04.995794] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.626 [2024-07-13 23:17:04.995844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:15.626 [2024-07-13 23:17:04.995980] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:15.626 [2024-07-13 23:17:04.996008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:15.626 [2024-07-13 23:17:04.996153] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:31:15.626 [2024-07-13 23:17:04.996179] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:15.626 [2024-07-13 23:17:04.996268] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:31:15.626 [2024-07-13 23:17:04.996975] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:31:15.626 [2024-07-13 23:17:04.996999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:31:15.626 [2024-07-13 23:17:04.997132] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:15.626 pt3 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.626 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.884 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:15.884 "name": "raid_bdev1", 00:31:15.884 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:15.884 "strip_size_kb": 64, 00:31:15.884 "state": "online", 00:31:15.884 "raid_level": "raid5f", 00:31:15.884 "superblock": true, 00:31:15.884 "num_base_bdevs": 3, 00:31:15.884 "num_base_bdevs_discovered": 3, 00:31:15.884 "num_base_bdevs_operational": 3, 00:31:15.884 "base_bdevs_list": [ 00:31:15.884 { 00:31:15.884 "name": "pt1", 00:31:15.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:15.884 "is_configured": true, 00:31:15.884 "data_offset": 2048, 00:31:15.884 "data_size": 63488 00:31:15.884 }, 00:31:15.884 { 00:31:15.884 "name": "pt2", 00:31:15.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:15.884 "is_configured": true, 00:31:15.884 "data_offset": 2048, 00:31:15.884 "data_size": 63488 00:31:15.884 }, 00:31:15.884 { 00:31:15.884 "name": "pt3", 00:31:15.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:15.884 "is_configured": true, 00:31:15.884 "data_offset": 2048, 00:31:15.884 "data_size": 63488 00:31:15.884 } 00:31:15.884 ] 00:31:15.884 }' 00:31:16.141 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.141 23:17:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:16.707 23:17:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:31:16.966 [2024-07-13 23:17:06.135679] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:16.966 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:16.966 "name": "raid_bdev1", 00:31:16.966 "aliases": [ 00:31:16.966 "90a650b5-8db9-4bbe-9b88-3635f4bf9b21" 00:31:16.966 ], 00:31:16.966 "product_name": "Raid Volume", 00:31:16.966 "block_size": 512, 00:31:16.966 "num_blocks": 126976, 00:31:16.966 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:16.966 "assigned_rate_limits": { 00:31:16.966 "rw_ios_per_sec": 0, 00:31:16.966 "rw_mbytes_per_sec": 0, 00:31:16.966 "r_mbytes_per_sec": 0, 00:31:16.966 "w_mbytes_per_sec": 0 00:31:16.966 }, 00:31:16.966 "claimed": false, 00:31:16.966 "zoned": false, 00:31:16.966 "supported_io_types": { 00:31:16.966 "read": true, 00:31:16.966 "write": true, 00:31:16.966 "unmap": false, 00:31:16.966 "flush": false, 00:31:16.966 "reset": true, 00:31:16.966 "nvme_admin": false, 00:31:16.966 "nvme_io": false, 00:31:16.966 "nvme_io_md": false, 00:31:16.966 "write_zeroes": true, 00:31:16.966 "zcopy": false, 00:31:16.966 "get_zone_info": false, 00:31:16.966 "zone_management": false, 00:31:16.966 "zone_append": false, 00:31:16.966 "compare": false, 00:31:16.966 "compare_and_write": false, 00:31:16.966 "abort": false, 00:31:16.966 "seek_hole": false, 00:31:16.966 "seek_data": false, 00:31:16.966 "copy": false, 00:31:16.966 "nvme_iov_md": false 00:31:16.966 }, 00:31:16.966 "driver_specific": { 00:31:16.966 "raid": { 00:31:16.966 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:16.966 "strip_size_kb": 64, 00:31:16.966 "state": "online", 00:31:16.966 "raid_level": "raid5f", 00:31:16.966 "superblock": true, 00:31:16.966 "num_base_bdevs": 3, 00:31:16.966 "num_base_bdevs_discovered": 3, 00:31:16.966 "num_base_bdevs_operational": 3, 00:31:16.966 "base_bdevs_list": [ 00:31:16.966 { 00:31:16.966 "name": "pt1", 00:31:16.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:16.966 "is_configured": true, 00:31:16.966 "data_offset": 2048, 00:31:16.966 "data_size": 63488 00:31:16.966 }, 00:31:16.966 { 00:31:16.966 "name": "pt2", 00:31:16.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:16.966 "is_configured": true, 00:31:16.966 "data_offset": 2048, 00:31:16.966 "data_size": 63488 00:31:16.966 }, 00:31:16.966 { 00:31:16.966 "name": "pt3", 00:31:16.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:16.966 "is_configured": true, 00:31:16.966 "data_offset": 2048, 00:31:16.966 "data_size": 63488 00:31:16.966 } 00:31:16.966 ] 00:31:16.966 } 00:31:16.966 } 00:31:16.966 }' 00:31:16.966 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:16.966 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:16.966 pt2 00:31:16.966 pt3' 00:31:16.966 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:16.966 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:16.966 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:17.224 "name": "pt1", 00:31:17.224 "aliases": [ 00:31:17.224 "00000000-0000-0000-0000-000000000001" 00:31:17.224 ], 
00:31:17.224 "product_name": "passthru", 00:31:17.224 "block_size": 512, 00:31:17.224 "num_blocks": 65536, 00:31:17.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:17.224 "assigned_rate_limits": { 00:31:17.224 "rw_ios_per_sec": 0, 00:31:17.224 "rw_mbytes_per_sec": 0, 00:31:17.224 "r_mbytes_per_sec": 0, 00:31:17.224 "w_mbytes_per_sec": 0 00:31:17.224 }, 00:31:17.224 "claimed": true, 00:31:17.224 "claim_type": "exclusive_write", 00:31:17.224 "zoned": false, 00:31:17.224 "supported_io_types": { 00:31:17.224 "read": true, 00:31:17.224 "write": true, 00:31:17.224 "unmap": true, 00:31:17.224 "flush": true, 00:31:17.224 "reset": true, 00:31:17.224 "nvme_admin": false, 00:31:17.224 "nvme_io": false, 00:31:17.224 "nvme_io_md": false, 00:31:17.224 "write_zeroes": true, 00:31:17.224 "zcopy": true, 00:31:17.224 "get_zone_info": false, 00:31:17.224 "zone_management": false, 00:31:17.224 "zone_append": false, 00:31:17.224 "compare": false, 00:31:17.224 "compare_and_write": false, 00:31:17.224 "abort": true, 00:31:17.224 "seek_hole": false, 00:31:17.224 "seek_data": false, 00:31:17.224 "copy": true, 00:31:17.224 "nvme_iov_md": false 00:31:17.224 }, 00:31:17.224 "memory_domains": [ 00:31:17.224 { 00:31:17.224 "dma_device_id": "system", 00:31:17.224 "dma_device_type": 1 00:31:17.224 }, 00:31:17.224 { 00:31:17.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.224 "dma_device_type": 2 00:31:17.224 } 00:31:17.224 ], 00:31:17.224 "driver_specific": { 00:31:17.224 "passthru": { 00:31:17.224 "name": "pt1", 00:31:17.224 "base_bdev_name": "malloc1" 00:31:17.224 } 00:31:17.224 } 00:31:17.224 }' 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:17.224 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:17.483 23:17:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:17.769 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:17.769 "name": "pt2", 00:31:17.769 "aliases": [ 00:31:17.769 "00000000-0000-0000-0000-000000000002" 00:31:17.769 ], 00:31:17.769 "product_name": "passthru", 00:31:17.769 "block_size": 512, 00:31:17.769 "num_blocks": 65536, 00:31:17.769 
"uuid": "00000000-0000-0000-0000-000000000002", 00:31:17.769 "assigned_rate_limits": { 00:31:17.769 "rw_ios_per_sec": 0, 00:31:17.769 "rw_mbytes_per_sec": 0, 00:31:17.769 "r_mbytes_per_sec": 0, 00:31:17.769 "w_mbytes_per_sec": 0 00:31:17.769 }, 00:31:17.769 "claimed": true, 00:31:17.769 "claim_type": "exclusive_write", 00:31:17.769 "zoned": false, 00:31:17.769 "supported_io_types": { 00:31:17.769 "read": true, 00:31:17.769 "write": true, 00:31:17.769 "unmap": true, 00:31:17.769 "flush": true, 00:31:17.769 "reset": true, 00:31:17.769 "nvme_admin": false, 00:31:17.769 "nvme_io": false, 00:31:17.769 "nvme_io_md": false, 00:31:17.769 "write_zeroes": true, 00:31:17.769 "zcopy": true, 00:31:17.769 "get_zone_info": false, 00:31:17.769 "zone_management": false, 00:31:17.769 "zone_append": false, 00:31:17.769 "compare": false, 00:31:17.769 "compare_and_write": false, 00:31:17.769 "abort": true, 00:31:17.769 "seek_hole": false, 00:31:17.769 "seek_data": false, 00:31:17.769 "copy": true, 00:31:17.769 "nvme_iov_md": false 00:31:17.769 }, 00:31:17.769 "memory_domains": [ 00:31:17.769 { 00:31:17.769 "dma_device_id": "system", 00:31:17.769 "dma_device_type": 1 00:31:17.769 }, 00:31:17.769 { 00:31:17.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.769 "dma_device_type": 2 00:31:17.769 } 00:31:17.769 ], 00:31:17.769 "driver_specific": { 00:31:17.769 "passthru": { 00:31:17.769 "name": "pt2", 00:31:17.769 "base_bdev_name": "malloc2" 00:31:17.769 } 00:31:17.769 } 00:31:17.769 }' 00:31:17.769 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:17.769 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:18.040 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:18.298 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:18.298 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:18.298 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:18.298 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:18.556 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:18.556 "name": "pt3", 00:31:18.556 "aliases": [ 00:31:18.556 "00000000-0000-0000-0000-000000000003" 00:31:18.556 ], 00:31:18.556 "product_name": "passthru", 00:31:18.556 "block_size": 512, 00:31:18.556 "num_blocks": 65536, 00:31:18.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:18.556 "assigned_rate_limits": { 00:31:18.556 "rw_ios_per_sec": 0, 
00:31:18.556 "rw_mbytes_per_sec": 0, 00:31:18.556 "r_mbytes_per_sec": 0, 00:31:18.556 "w_mbytes_per_sec": 0 00:31:18.556 }, 00:31:18.556 "claimed": true, 00:31:18.556 "claim_type": "exclusive_write", 00:31:18.556 "zoned": false, 00:31:18.556 "supported_io_types": { 00:31:18.556 "read": true, 00:31:18.556 "write": true, 00:31:18.556 "unmap": true, 00:31:18.556 "flush": true, 00:31:18.556 "reset": true, 00:31:18.556 "nvme_admin": false, 00:31:18.556 "nvme_io": false, 00:31:18.556 "nvme_io_md": false, 00:31:18.556 "write_zeroes": true, 00:31:18.556 "zcopy": true, 00:31:18.556 "get_zone_info": false, 00:31:18.556 "zone_management": false, 00:31:18.556 "zone_append": false, 00:31:18.556 "compare": false, 00:31:18.556 "compare_and_write": false, 00:31:18.556 "abort": true, 00:31:18.556 "seek_hole": false, 00:31:18.556 "seek_data": false, 00:31:18.556 "copy": true, 00:31:18.556 "nvme_iov_md": false 00:31:18.556 }, 00:31:18.556 "memory_domains": [ 00:31:18.556 { 00:31:18.556 "dma_device_id": "system", 00:31:18.556 "dma_device_type": 1 00:31:18.556 }, 00:31:18.556 { 00:31:18.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:18.556 "dma_device_type": 2 00:31:18.556 } 00:31:18.556 ], 00:31:18.556 "driver_specific": { 00:31:18.556 "passthru": { 00:31:18.556 "name": "pt3", 00:31:18.556 "base_bdev_name": "malloc3" 00:31:18.556 } 00:31:18.556 } 00:31:18.557 }' 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:18.557 23:17:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:18.815 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:19.074 [2024-07-13 23:17:08.408274] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:19.074 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 90a650b5-8db9-4bbe-9b88-3635f4bf9b21 '!=' 90a650b5-8db9-4bbe-9b88-3635f4bf9b21 ']' 00:31:19.074 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:31:19.074 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:19.074 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:31:19.074 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:19.333 [2024-07-13 23:17:08.628139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.333 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.592 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.592 "name": "raid_bdev1", 00:31:19.592 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:19.592 "strip_size_kb": 64, 00:31:19.592 "state": "online", 00:31:19.592 "raid_level": "raid5f", 00:31:19.592 "superblock": true, 00:31:19.592 "num_base_bdevs": 3, 00:31:19.592 "num_base_bdevs_discovered": 2, 00:31:19.592 "num_base_bdevs_operational": 2, 00:31:19.592 "base_bdevs_list": [ 00:31:19.592 { 00:31:19.592 "name": null, 00:31:19.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.592 "is_configured": false, 00:31:19.592 "data_offset": 2048, 00:31:19.592 "data_size": 63488 00:31:19.592 }, 00:31:19.592 { 00:31:19.592 "name": "pt2", 00:31:19.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:19.592 "is_configured": true, 00:31:19.592 "data_offset": 2048, 00:31:19.592 "data_size": 63488 00:31:19.592 }, 00:31:19.592 { 00:31:19.592 "name": "pt3", 00:31:19.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:19.592 "is_configured": true, 00:31:19.592 "data_offset": 2048, 00:31:19.592 "data_size": 63488 00:31:19.592 } 00:31:19.592 ] 00:31:19.592 }' 00:31:19.592 23:17:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.592 23:17:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.159 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:20.418 [2024-07-13 23:17:09.696346] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:20.418 [2024-07-13 23:17:09.696379] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:20.418 [2024-07-13 23:17:09.696465] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:31:20.418 [2024-07-13 23:17:09.696533] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:20.418 [2024-07-13 23:17:09.696545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:31:20.418 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:20.418 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:20.676 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:20.676 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:31:20.676 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:20.676 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:20.676 23:17:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:20.935 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:20.935 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:20.935 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:21.194 [2024-07-13 23:17:10.564554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:21.194 [2024-07-13 23:17:10.564672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.194 [2024-07-13 23:17:10.564710] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:21.194 [2024-07-13 23:17:10.564738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.194 [2024-07-13 23:17:10.567301] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.194 [2024-07-13 23:17:10.567387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:21.194 [2024-07-13 23:17:10.567496] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:21.194 [2024-07-13 23:17:10.567535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:21.194 pt2 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.194 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.760 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:21.760 "name": "raid_bdev1", 00:31:21.760 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:21.760 "strip_size_kb": 64, 00:31:21.760 "state": "configuring", 00:31:21.760 "raid_level": "raid5f", 00:31:21.760 "superblock": true, 00:31:21.760 "num_base_bdevs": 3, 00:31:21.760 "num_base_bdevs_discovered": 1, 00:31:21.760 "num_base_bdevs_operational": 2, 00:31:21.760 "base_bdevs_list": [ 00:31:21.760 { 00:31:21.760 "name": null, 00:31:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:21.760 "is_configured": false, 00:31:21.760 "data_offset": 2048, 00:31:21.760 "data_size": 63488 00:31:21.760 }, 00:31:21.760 { 00:31:21.760 "name": "pt2", 00:31:21.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:21.760 "is_configured": true, 00:31:21.760 "data_offset": 2048, 00:31:21.760 "data_size": 63488 00:31:21.760 }, 00:31:21.760 { 00:31:21.760 "name": null, 00:31:21.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:21.760 "is_configured": false, 00:31:21.760 "data_offset": 2048, 00:31:21.760 "data_size": 63488 00:31:21.760 } 00:31:21.760 ] 00:31:21.760 }' 00:31:21.760 23:17:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:21.760 23:17:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.326 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:31:22.326 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:22.326 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:31:22.326 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:22.584 [2024-07-13 23:17:11.740315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:22.585 [2024-07-13 23:17:11.740629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:22.585 [2024-07-13 23:17:11.740796] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:22.585 [2024-07-13 23:17:11.740951] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:22.585 [2024-07-13 
23:17:11.741558] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:22.585 [2024-07-13 23:17:11.741769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:22.585 [2024-07-13 23:17:11.742004] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:22.585 [2024-07-13 23:17:11.742139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:22.585 [2024-07-13 23:17:11.742375] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:31:22.585 [2024-07-13 23:17:11.742493] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:22.585 [2024-07-13 23:17:11.742668] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:31:22.585 [2024-07-13 23:17:11.743529] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:31:22.585 [2024-07-13 23:17:11.743672] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:31:22.585 [2024-07-13 23:17:11.744037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.585 pt3 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.585 23:17:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.842 23:17:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.842 "name": "raid_bdev1", 00:31:22.842 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:22.842 "strip_size_kb": 64, 00:31:22.843 "state": "online", 00:31:22.843 "raid_level": "raid5f", 00:31:22.843 "superblock": true, 00:31:22.843 "num_base_bdevs": 3, 00:31:22.843 "num_base_bdevs_discovered": 2, 00:31:22.843 "num_base_bdevs_operational": 2, 00:31:22.843 "base_bdevs_list": [ 00:31:22.843 { 00:31:22.843 "name": null, 00:31:22.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.843 "is_configured": false, 00:31:22.843 "data_offset": 2048, 00:31:22.843 "data_size": 63488 00:31:22.843 }, 00:31:22.843 { 00:31:22.843 "name": "pt2", 00:31:22.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:22.843 "is_configured": true, 
00:31:22.843 "data_offset": 2048, 00:31:22.843 "data_size": 63488 00:31:22.843 }, 00:31:22.843 { 00:31:22.843 "name": "pt3", 00:31:22.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:22.843 "is_configured": true, 00:31:22.843 "data_offset": 2048, 00:31:22.843 "data_size": 63488 00:31:22.843 } 00:31:22.843 ] 00:31:22.843 }' 00:31:22.843 23:17:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.843 23:17:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.410 23:17:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:23.668 [2024-07-13 23:17:12.840582] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:23.668 [2024-07-13 23:17:12.840884] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:23.668 [2024-07-13 23:17:12.841134] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:23.668 [2024-07-13 23:17:12.841327] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:23.668 [2024-07-13 23:17:12.841444] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:31:23.668 23:17:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.668 23:17:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:23.927 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:23.927 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:23.927 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:31:23.927 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:31:23.927 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:24.185 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:24.443 [2024-07-13 23:17:13.604745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:24.443 [2024-07-13 23:17:13.604998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.443 [2024-07-13 23:17:13.605079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:24.443 [2024-07-13 23:17:13.605347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.443 [2024-07-13 23:17:13.607677] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.443 [2024-07-13 23:17:13.607884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:24.443 [2024-07-13 23:17:13.608091] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:24.443 [2024-07-13 23:17:13.608223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:24.443 [2024-07-13 23:17:13.608512] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:24.443 [2024-07-13 23:17:13.608629] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:24.443 [2024-07-13 23:17:13.608691] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:31:24.443 [2024-07-13 23:17:13.608905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:24.443 pt1 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.443 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:24.443 "name": "raid_bdev1", 00:31:24.443 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:24.443 "strip_size_kb": 64, 00:31:24.443 "state": "configuring", 00:31:24.443 "raid_level": "raid5f", 00:31:24.443 "superblock": true, 00:31:24.443 "num_base_bdevs": 3, 00:31:24.443 "num_base_bdevs_discovered": 1, 00:31:24.443 "num_base_bdevs_operational": 2, 00:31:24.443 "base_bdevs_list": [ 00:31:24.443 { 00:31:24.443 "name": null, 00:31:24.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.443 "is_configured": false, 00:31:24.443 "data_offset": 2048, 00:31:24.443 "data_size": 63488 00:31:24.443 }, 00:31:24.443 { 00:31:24.443 "name": "pt2", 00:31:24.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:24.443 "is_configured": true, 00:31:24.443 "data_offset": 2048, 00:31:24.443 "data_size": 63488 00:31:24.443 }, 00:31:24.443 { 00:31:24.443 "name": null, 00:31:24.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:24.443 "is_configured": false, 00:31:24.443 "data_offset": 2048, 00:31:24.443 "data_size": 63488 00:31:24.443 } 00:31:24.443 ] 00:31:24.443 }' 00:31:24.702 23:17:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:24.702 23:17:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.268 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:31:25.268 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:25.526 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:31:25.526 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:25.785 [2024-07-13 23:17:14.953707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:25.785 [2024-07-13 23:17:14.953957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:25.785 [2024-07-13 23:17:14.954104] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:25.785 [2024-07-13 23:17:14.954233] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:25.785 [2024-07-13 23:17:14.954728] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:25.785 [2024-07-13 23:17:14.954885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:25.785 [2024-07-13 23:17:14.955121] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:25.785 [2024-07-13 23:17:14.955249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:25.785 [2024-07-13 23:17:14.955484] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:31:25.785 [2024-07-13 23:17:14.955598] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:25.785 [2024-07-13 23:17:14.955712] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:31:25.785 [2024-07-13 23:17:14.956603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:31:25.785 [2024-07-13 23:17:14.956728] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:31:25.785 [2024-07-13 23:17:14.957003] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.785 pt3 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.785 23:17:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.077 23:17:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:26.077 "name": "raid_bdev1", 00:31:26.077 "uuid": "90a650b5-8db9-4bbe-9b88-3635f4bf9b21", 00:31:26.077 "strip_size_kb": 64, 00:31:26.077 "state": "online", 00:31:26.077 "raid_level": "raid5f", 00:31:26.077 "superblock": true, 00:31:26.077 "num_base_bdevs": 3, 00:31:26.077 "num_base_bdevs_discovered": 2, 00:31:26.077 "num_base_bdevs_operational": 2, 00:31:26.077 "base_bdevs_list": [ 00:31:26.077 { 00:31:26.077 "name": null, 00:31:26.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.077 "is_configured": false, 00:31:26.077 "data_offset": 2048, 00:31:26.077 "data_size": 63488 00:31:26.077 }, 00:31:26.077 { 00:31:26.077 "name": "pt2", 00:31:26.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:26.077 "is_configured": true, 00:31:26.077 "data_offset": 2048, 00:31:26.077 "data_size": 63488 00:31:26.077 }, 00:31:26.077 { 00:31:26.077 "name": "pt3", 00:31:26.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:26.077 "is_configured": true, 00:31:26.077 "data_offset": 2048, 00:31:26.077 "data_size": 63488 00:31:26.077 } 00:31:26.077 ] 00:31:26.077 }' 00:31:26.077 23:17:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:26.077 23:17:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.656 23:17:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:26.656 23:17:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:26.656 23:17:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:26.656 23:17:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:26.656 23:17:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:26.914 [2024-07-13 23:17:16.263082] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 90a650b5-8db9-4bbe-9b88-3635f4bf9b21 '!=' 90a650b5-8db9-4bbe-9b88-3635f4bf9b21 ']' 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 161561 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 161561 ']' 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 161561 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161561 00:31:26.914 killing process with pid 161561 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:26.914 23:17:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161561' 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 161561 00:31:26.914 [2024-07-13 23:17:16.300586] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:26.914 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 161561 00:31:26.914 [2024-07-13 23:17:16.300661] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:26.914 [2024-07-13 23:17:16.300728] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:26.914 [2024-07-13 23:17:16.300754] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:31:27.172 [2024-07-13 23:17:16.331639] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:27.172 ************************************ 00:31:27.172 END TEST raid5f_superblock_test 00:31:27.172 ************************************ 00:31:27.172 23:17:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:31:27.172 00:31:27.172 real 0m22.044s 00:31:27.172 user 0m42.016s 00:31:27.172 sys 0m2.485s 00:31:27.172 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:27.172 23:17:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.429 23:17:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:27.429 23:17:16 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:31:27.429 23:17:16 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:31:27.429 23:17:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:27.429 23:17:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.429 23:17:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:27.429 ************************************ 00:31:27.429 START TEST raid5f_rebuild_test 00:31:27.429 ************************************ 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 false false true 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:27.429 23:17:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=162287 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 162287 /var/tmp/spdk-raid.sock 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 162287 ']' 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:27.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:27.429 23:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.429 [2024-07-13 23:17:16.699474] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:27.429 [2024-07-13 23:17:16.699997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162287 ] 00:31:27.429 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:27.429 Zero copy mechanism will not be used. 
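For orientation at this point in the log: the xtrace above shows raid_rebuild_test entering its setup phase (argument parsing, base_bdevs construction, strip-size selection, and the bdevperf launch). The condensed bash sketch below is reconstructed solely from the commands visible in the trace — names such as raid_rebuild_test, create_arg, raid_pid, and waitforlisten appear in the trace itself, while the function body is a paraphrase of it, not the authoritative test/bdev/bdev_raid.sh source:

    # Condensed sketch (reconstructed from the xtrace above) of the setup phase of
    # raid_rebuild_test, invoked here as: raid_rebuild_test raid5f 3 false false true
    raid_rebuild_test() {
        local raid_level=$1       # raid5f
        local num_base_bdevs=$2   # 3
        local superblock=$3       # false
        local background_io=$4    # false
        local verify=$5           # true

        local base_bdevs=()
        for ((i = 1; i <= num_base_bdevs; i++)); do
            base_bdevs+=("BaseBdev$i")   # -> BaseBdev1 BaseBdev2 BaseBdev3
        done

        local create_arg=''
        if [ "$raid_level" != raid1 ]; then
            local strip_size=64
            create_arg+=" -z $strip_size"  # strip size in KB; matches "strip_size_kb": 64 in the JSON dumps
        fi

        # Launch bdevperf on a private RPC socket; every rpc.py call that follows
        # in the trace targets this socket via -s /var/tmp/spdk-raid.sock.
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
            -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
        raid_pid=$!
        waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    }

The -o 3M I/O size also explains the notice directly above: 3 MiB (3145728 bytes) requests exceed bdevperf's 65536-byte zero-copy threshold, so it falls back to buffered I/O for the duration of the test.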
00:31:27.686 [2024-07-13 23:17:16.844556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.686 [2024-07-13 23:17:16.930842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.686 [2024-07-13 23:17:16.984496] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:28.252 23:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.252 23:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:31:28.252 23:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:28.252 23:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:28.509 BaseBdev1_malloc 00:31:28.509 23:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:28.768 [2024-07-13 23:17:18.110265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:28.768 [2024-07-13 23:17:18.110611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.768 [2024-07-13 23:17:18.110826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:31:28.768 [2024-07-13 23:17:18.110993] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.768 [2024-07-13 23:17:18.113960] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.768 [2024-07-13 23:17:18.114188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:28.768 BaseBdev1 00:31:28.768 23:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:28.768 23:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:29.026 BaseBdev2_malloc 00:31:29.026 23:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:29.284 [2024-07-13 23:17:18.589652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:29.284 [2024-07-13 23:17:18.589965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.284 [2024-07-13 23:17:18.590127] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:31:29.284 [2024-07-13 23:17:18.590289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.284 [2024-07-13 23:17:18.592866] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.284 [2024-07-13 23:17:18.593103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:29.284 BaseBdev2 00:31:29.284 23:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:29.284 23:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:29.543 BaseBdev3_malloc 00:31:29.543 23:17:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:29.801 [2024-07-13 23:17:19.077808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:29.801 [2024-07-13 23:17:19.078151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:29.801 [2024-07-13 23:17:19.078317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:29.801 [2024-07-13 23:17:19.078466] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:29.801 [2024-07-13 23:17:19.081017] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:29.801 [2024-07-13 23:17:19.081212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:29.801 BaseBdev3 00:31:29.801 23:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:30.060 spare_malloc 00:31:30.060 23:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:30.318 spare_delay 00:31:30.318 23:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:30.576 [2024-07-13 23:17:19.813666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:30.576 [2024-07-13 23:17:19.813992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:30.576 [2024-07-13 23:17:19.814146] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:30.576 [2024-07-13 23:17:19.814290] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:30.576 [2024-07-13 23:17:19.816854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:30.576 [2024-07-13 23:17:19.817082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:30.576 spare 00:31:30.576 23:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:31:30.834 [2024-07-13 23:17:20.118027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:30.834 [2024-07-13 23:17:20.120331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:30.834 [2024-07-13 23:17:20.120542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:30.834 [2024-07-13 23:17:20.120735] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:31:30.834 [2024-07-13 23:17:20.120785] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:30.834 [2024-07-13 23:17:20.121107] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:31:30.834 [2024-07-13 23:17:20.122129] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:31:30.834 [2024-07-13 23:17:20.122282] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:31:30.834 [2024-07-13 23:17:20.122692] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.834 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.092 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:31.092 "name": "raid_bdev1", 00:31:31.092 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:31.092 "strip_size_kb": 64, 00:31:31.092 "state": "online", 00:31:31.092 "raid_level": "raid5f", 00:31:31.092 "superblock": false, 00:31:31.092 "num_base_bdevs": 3, 00:31:31.092 "num_base_bdevs_discovered": 3, 00:31:31.092 "num_base_bdevs_operational": 3, 00:31:31.092 "base_bdevs_list": [ 00:31:31.092 { 00:31:31.092 "name": "BaseBdev1", 00:31:31.092 "uuid": "9c82d9a7-7c5d-5725-89eb-6eba4229fc80", 00:31:31.092 "is_configured": true, 00:31:31.092 "data_offset": 0, 00:31:31.092 "data_size": 65536 00:31:31.092 }, 00:31:31.092 { 00:31:31.092 "name": "BaseBdev2", 00:31:31.092 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:31.092 "is_configured": true, 00:31:31.092 "data_offset": 0, 00:31:31.092 "data_size": 65536 00:31:31.092 }, 00:31:31.092 { 00:31:31.092 "name": "BaseBdev3", 00:31:31.092 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:31.092 "is_configured": true, 00:31:31.092 "data_offset": 0, 00:31:31.092 "data_size": 65536 00:31:31.092 } 00:31:31.092 ] 00:31:31.092 }' 00:31:31.092 23:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:31.092 23:17:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.658 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:31.658 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:31.917 [2024-07-13 23:17:21.315627] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.175 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:31:32.175 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.175 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:32.434 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:32.434 [2024-07-13 23:17:21.815609] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:31:32.434 /dev/nbd0 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:32.692 1+0 records in 00:31:32.692 1+0 records out 00:31:32.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577824 s, 7.1 MB/s 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:31:32.692 23:17:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:31:32.692 23:17:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:31:32.951 512+0 records in 00:31:32.951 512+0 records out 00:31:32.951 67108864 bytes (67 MB, 64 MiB) copied, 0.362684 s, 185 MB/s 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:32.951 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:33.209 [2024-07-13 23:17:22.528667] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:33.209 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:33.467 [2024-07-13 23:17:22.775393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.467 23:17:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.724 23:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:33.724 "name": "raid_bdev1", 00:31:33.724 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:33.724 "strip_size_kb": 64, 00:31:33.724 "state": "online", 00:31:33.724 "raid_level": "raid5f", 00:31:33.724 "superblock": false, 00:31:33.724 "num_base_bdevs": 3, 00:31:33.724 "num_base_bdevs_discovered": 2, 00:31:33.724 "num_base_bdevs_operational": 2, 00:31:33.724 "base_bdevs_list": [ 00:31:33.724 { 00:31:33.724 "name": null, 00:31:33.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.724 "is_configured": false, 00:31:33.724 "data_offset": 0, 00:31:33.724 "data_size": 65536 00:31:33.724 }, 00:31:33.724 { 00:31:33.724 "name": "BaseBdev2", 00:31:33.724 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:33.724 "is_configured": true, 00:31:33.724 "data_offset": 0, 00:31:33.724 "data_size": 65536 00:31:33.724 }, 00:31:33.724 { 00:31:33.724 "name": "BaseBdev3", 00:31:33.724 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:33.724 "is_configured": true, 00:31:33.724 "data_offset": 0, 00:31:33.724 "data_size": 65536 00:31:33.724 } 00:31:33.724 ] 00:31:33.724 }' 00:31:33.724 23:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:33.724 23:17:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.289 23:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:34.547 [2024-07-13 23:17:23.879653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:34.547 [2024-07-13 23:17:23.884976] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:31:34.547 [2024-07-13 23:17:23.887455] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:34.547 23:17:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:35.926 23:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:35.926 23:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:35.926 23:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:35.927 23:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:35.927 23:17:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:35.927 23:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.927 23:17:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.927 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:35.927 "name": "raid_bdev1", 00:31:35.927 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:35.927 "strip_size_kb": 64, 00:31:35.927 "state": "online", 00:31:35.927 "raid_level": "raid5f", 00:31:35.927 "superblock": false, 00:31:35.927 "num_base_bdevs": 3, 00:31:35.927 "num_base_bdevs_discovered": 3, 00:31:35.927 "num_base_bdevs_operational": 3, 00:31:35.927 "process": { 00:31:35.927 "type": "rebuild", 00:31:35.927 "target": "spare", 00:31:35.927 "progress": { 00:31:35.927 "blocks": 24576, 00:31:35.927 "percent": 18 00:31:35.927 } 00:31:35.927 }, 00:31:35.927 "base_bdevs_list": [ 00:31:35.927 { 00:31:35.927 "name": "spare", 00:31:35.927 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:35.927 "is_configured": true, 00:31:35.927 "data_offset": 0, 00:31:35.927 "data_size": 65536 00:31:35.927 }, 00:31:35.927 { 00:31:35.927 "name": "BaseBdev2", 00:31:35.927 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:35.927 "is_configured": true, 00:31:35.927 "data_offset": 0, 00:31:35.927 "data_size": 65536 00:31:35.927 }, 00:31:35.927 { 00:31:35.927 "name": "BaseBdev3", 00:31:35.927 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:35.927 "is_configured": true, 00:31:35.927 "data_offset": 0, 00:31:35.927 "data_size": 65536 00:31:35.927 } 00:31:35.927 ] 00:31:35.927 }' 00:31:35.927 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:35.927 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:35.927 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:35.927 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:35.927 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:36.185 [2024-07-13 23:17:25.474292] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:36.185 [2024-07-13 23:17:25.503148] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:36.185 [2024-07-13 23:17:25.503411] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.185 [2024-07-13 23:17:25.503578] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:36.185 [2024-07-13 23:17:25.503689] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.185 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.443 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:36.443 "name": "raid_bdev1", 00:31:36.443 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:36.443 "strip_size_kb": 64, 00:31:36.443 "state": "online", 00:31:36.443 "raid_level": "raid5f", 00:31:36.443 "superblock": false, 00:31:36.443 "num_base_bdevs": 3, 00:31:36.443 "num_base_bdevs_discovered": 2, 00:31:36.443 "num_base_bdevs_operational": 2, 00:31:36.443 "base_bdevs_list": [ 00:31:36.443 { 00:31:36.443 "name": null, 00:31:36.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.443 "is_configured": false, 00:31:36.443 "data_offset": 0, 00:31:36.443 "data_size": 65536 00:31:36.443 }, 00:31:36.443 { 00:31:36.443 "name": "BaseBdev2", 00:31:36.443 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:36.443 "is_configured": true, 00:31:36.443 "data_offset": 0, 00:31:36.443 "data_size": 65536 00:31:36.443 }, 00:31:36.443 { 00:31:36.443 "name": "BaseBdev3", 00:31:36.443 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:36.443 "is_configured": true, 00:31:36.443 "data_offset": 0, 00:31:36.443 "data_size": 65536 00:31:36.443 } 00:31:36.443 ] 00:31:36.443 }' 00:31:36.443 23:17:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:36.443 23:17:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:37.376 "name": "raid_bdev1", 00:31:37.376 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:37.376 "strip_size_kb": 64, 00:31:37.376 "state": "online", 00:31:37.376 "raid_level": "raid5f", 00:31:37.376 "superblock": false, 00:31:37.376 "num_base_bdevs": 3, 00:31:37.376 "num_base_bdevs_discovered": 2, 00:31:37.376 
"num_base_bdevs_operational": 2, 00:31:37.376 "base_bdevs_list": [ 00:31:37.376 { 00:31:37.376 "name": null, 00:31:37.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.376 "is_configured": false, 00:31:37.376 "data_offset": 0, 00:31:37.376 "data_size": 65536 00:31:37.376 }, 00:31:37.376 { 00:31:37.376 "name": "BaseBdev2", 00:31:37.376 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:37.376 "is_configured": true, 00:31:37.376 "data_offset": 0, 00:31:37.376 "data_size": 65536 00:31:37.376 }, 00:31:37.376 { 00:31:37.376 "name": "BaseBdev3", 00:31:37.376 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:37.376 "is_configured": true, 00:31:37.376 "data_offset": 0, 00:31:37.376 "data_size": 65536 00:31:37.376 } 00:31:37.376 ] 00:31:37.376 }' 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:37.376 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:37.634 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:37.634 23:17:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:37.634 [2024-07-13 23:17:27.018722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:37.634 [2024-07-13 23:17:27.026201] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:31:37.634 [2024-07-13 23:17:27.029437] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:37.634 23:17:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:39.010 "name": "raid_bdev1", 00:31:39.010 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:39.010 "strip_size_kb": 64, 00:31:39.010 "state": "online", 00:31:39.010 "raid_level": "raid5f", 00:31:39.010 "superblock": false, 00:31:39.010 "num_base_bdevs": 3, 00:31:39.010 "num_base_bdevs_discovered": 3, 00:31:39.010 "num_base_bdevs_operational": 3, 00:31:39.010 "process": { 00:31:39.010 "type": "rebuild", 00:31:39.010 "target": "spare", 00:31:39.010 "progress": { 00:31:39.010 "blocks": 24576, 00:31:39.010 "percent": 18 00:31:39.010 } 00:31:39.010 }, 00:31:39.010 "base_bdevs_list": [ 00:31:39.010 { 00:31:39.010 "name": "spare", 00:31:39.010 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:39.010 
"is_configured": true, 00:31:39.010 "data_offset": 0, 00:31:39.010 "data_size": 65536 00:31:39.010 }, 00:31:39.010 { 00:31:39.010 "name": "BaseBdev2", 00:31:39.010 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:39.010 "is_configured": true, 00:31:39.010 "data_offset": 0, 00:31:39.010 "data_size": 65536 00:31:39.010 }, 00:31:39.010 { 00:31:39.010 "name": "BaseBdev3", 00:31:39.010 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:39.010 "is_configured": true, 00:31:39.010 "data_offset": 0, 00:31:39.010 "data_size": 65536 00:31:39.010 } 00:31:39.010 ] 00:31:39.010 }' 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1086 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.010 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.577 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:39.577 "name": "raid_bdev1", 00:31:39.577 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:39.577 "strip_size_kb": 64, 00:31:39.577 "state": "online", 00:31:39.577 "raid_level": "raid5f", 00:31:39.577 "superblock": false, 00:31:39.577 "num_base_bdevs": 3, 00:31:39.577 "num_base_bdevs_discovered": 3, 00:31:39.577 "num_base_bdevs_operational": 3, 00:31:39.577 "process": { 00:31:39.577 "type": "rebuild", 00:31:39.577 "target": "spare", 00:31:39.577 "progress": { 00:31:39.577 "blocks": 32768, 00:31:39.577 "percent": 25 00:31:39.577 } 00:31:39.577 }, 00:31:39.577 "base_bdevs_list": [ 00:31:39.577 { 00:31:39.577 "name": "spare", 00:31:39.577 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:39.577 "is_configured": true, 00:31:39.577 "data_offset": 0, 00:31:39.577 "data_size": 65536 00:31:39.577 }, 00:31:39.577 { 00:31:39.577 "name": "BaseBdev2", 00:31:39.577 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:39.577 "is_configured": true, 00:31:39.577 "data_offset": 0, 00:31:39.577 "data_size": 65536 
00:31:39.577 }, 00:31:39.577 { 00:31:39.577 "name": "BaseBdev3", 00:31:39.577 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:39.577 "is_configured": true, 00:31:39.577 "data_offset": 0, 00:31:39.577 "data_size": 65536 00:31:39.578 } 00:31:39.578 ] 00:31:39.578 }' 00:31:39.578 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:39.578 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.578 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:39.578 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.578 23:17:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.511 23:17:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.768 23:17:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:40.768 "name": "raid_bdev1", 00:31:40.768 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:40.768 "strip_size_kb": 64, 00:31:40.768 "state": "online", 00:31:40.768 "raid_level": "raid5f", 00:31:40.768 "superblock": false, 00:31:40.768 "num_base_bdevs": 3, 00:31:40.768 "num_base_bdevs_discovered": 3, 00:31:40.768 "num_base_bdevs_operational": 3, 00:31:40.768 "process": { 00:31:40.768 "type": "rebuild", 00:31:40.768 "target": "spare", 00:31:40.768 "progress": { 00:31:40.768 "blocks": 59392, 00:31:40.768 "percent": 45 00:31:40.768 } 00:31:40.768 }, 00:31:40.768 "base_bdevs_list": [ 00:31:40.768 { 00:31:40.768 "name": "spare", 00:31:40.768 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:40.768 "is_configured": true, 00:31:40.768 "data_offset": 0, 00:31:40.768 "data_size": 65536 00:31:40.768 }, 00:31:40.768 { 00:31:40.768 "name": "BaseBdev2", 00:31:40.768 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:40.768 "is_configured": true, 00:31:40.768 "data_offset": 0, 00:31:40.768 "data_size": 65536 00:31:40.768 }, 00:31:40.768 { 00:31:40.768 "name": "BaseBdev3", 00:31:40.768 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:40.768 "is_configured": true, 00:31:40.768 "data_offset": 0, 00:31:40.768 "data_size": 65536 00:31:40.768 } 00:31:40.768 ] 00:31:40.768 }' 00:31:40.768 23:17:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:40.768 23:17:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:40.768 23:17:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:40.768 23:17:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:40.768 23:17:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:42.170 "name": "raid_bdev1", 00:31:42.170 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:42.170 "strip_size_kb": 64, 00:31:42.170 "state": "online", 00:31:42.170 "raid_level": "raid5f", 00:31:42.170 "superblock": false, 00:31:42.170 "num_base_bdevs": 3, 00:31:42.170 "num_base_bdevs_discovered": 3, 00:31:42.170 "num_base_bdevs_operational": 3, 00:31:42.170 "process": { 00:31:42.170 "type": "rebuild", 00:31:42.170 "target": "spare", 00:31:42.170 "progress": { 00:31:42.170 "blocks": 88064, 00:31:42.170 "percent": 67 00:31:42.170 } 00:31:42.170 }, 00:31:42.170 "base_bdevs_list": [ 00:31:42.170 { 00:31:42.170 "name": "spare", 00:31:42.170 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:42.170 "is_configured": true, 00:31:42.170 "data_offset": 0, 00:31:42.170 "data_size": 65536 00:31:42.170 }, 00:31:42.170 { 00:31:42.170 "name": "BaseBdev2", 00:31:42.170 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:42.170 "is_configured": true, 00:31:42.170 "data_offset": 0, 00:31:42.170 "data_size": 65536 00:31:42.170 }, 00:31:42.170 { 00:31:42.170 "name": "BaseBdev3", 00:31:42.170 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:42.170 "is_configured": true, 00:31:42.170 "data_offset": 0, 00:31:42.170 "data_size": 65536 00:31:42.170 } 00:31:42.170 ] 00:31:42.170 }' 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:42.170 23:17:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:43.102 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:43.102 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:43.102 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:43.102 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
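The iterations above and below all follow the same polling pattern: fetch every raid bdev over the RPC socket, isolate raid_bdev1 with jq, and keep sleeping one second while the rebuild is still running against the spare. A minimal standalone sketch of that loop, assuming the same rpc.py path and socket as this trace (the 1086-second budget is the timeout local set at bdev_raid.sh@705 above):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=1086
    while (( SECONDS < timeout )); do
        # One RPC round-trip per iteration; keep only the bdev under test.
        info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # The '// "none"' alternative yields a sentinel once .process disappears,
        # which is how the loop detects a finished rebuild.
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
        sleep 1
    done

The progress object reported alongside climbs across iterations — 32768 blocks / 25% above, then 59392 / 45%, 88064 / 67% and 114688 / 87% below — until the process key vanishes and the loop falls through to the post-rebuild checks.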
00:31:43.102 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:43.103 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:43.360 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.360 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.618 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.618 "name": "raid_bdev1", 00:31:43.618 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:43.618 "strip_size_kb": 64, 00:31:43.618 "state": "online", 00:31:43.618 "raid_level": "raid5f", 00:31:43.618 "superblock": false, 00:31:43.618 "num_base_bdevs": 3, 00:31:43.618 "num_base_bdevs_discovered": 3, 00:31:43.618 "num_base_bdevs_operational": 3, 00:31:43.618 "process": { 00:31:43.618 "type": "rebuild", 00:31:43.618 "target": "spare", 00:31:43.618 "progress": { 00:31:43.618 "blocks": 114688, 00:31:43.618 "percent": 87 00:31:43.618 } 00:31:43.618 }, 00:31:43.618 "base_bdevs_list": [ 00:31:43.618 { 00:31:43.618 "name": "spare", 00:31:43.618 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:43.618 "is_configured": true, 00:31:43.618 "data_offset": 0, 00:31:43.618 "data_size": 65536 00:31:43.618 }, 00:31:43.618 { 00:31:43.618 "name": "BaseBdev2", 00:31:43.618 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:43.618 "is_configured": true, 00:31:43.618 "data_offset": 0, 00:31:43.618 "data_size": 65536 00:31:43.618 }, 00:31:43.618 { 00:31:43.618 "name": "BaseBdev3", 00:31:43.618 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:43.618 "is_configured": true, 00:31:43.618 "data_offset": 0, 00:31:43.618 "data_size": 65536 00:31:43.618 } 00:31:43.618 ] 00:31:43.618 }' 00:31:43.618 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.618 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.618 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.618 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.618 23:17:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:44.189 [2024-07-13 23:17:33.500991] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:44.189 [2024-07-13 23:17:33.501440] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:44.189 [2024-07-13 23:17:33.501723] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.761 23:17:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.761 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:44.761 "name": "raid_bdev1", 00:31:44.761 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:44.761 "strip_size_kb": 64, 00:31:44.761 "state": "online", 00:31:44.761 "raid_level": "raid5f", 00:31:44.761 "superblock": false, 00:31:44.761 "num_base_bdevs": 3, 00:31:44.761 "num_base_bdevs_discovered": 3, 00:31:44.761 "num_base_bdevs_operational": 3, 00:31:44.761 "base_bdevs_list": [ 00:31:44.761 { 00:31:44.761 "name": "spare", 00:31:44.761 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:44.761 "is_configured": true, 00:31:44.761 "data_offset": 0, 00:31:44.761 "data_size": 65536 00:31:44.761 }, 00:31:44.762 { 00:31:44.762 "name": "BaseBdev2", 00:31:44.762 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:44.762 "is_configured": true, 00:31:44.762 "data_offset": 0, 00:31:44.762 "data_size": 65536 00:31:44.762 }, 00:31:44.762 { 00:31:44.762 "name": "BaseBdev3", 00:31:44.762 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:44.762 "is_configured": true, 00:31:44.762 "data_offset": 0, 00:31:44.762 "data_size": 65536 00:31:44.762 } 00:31:44.762 ] 00:31:44.762 }' 00:31:44.762 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.020 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.277 "name": "raid_bdev1", 00:31:45.277 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:45.277 "strip_size_kb": 64, 00:31:45.277 "state": "online", 00:31:45.277 "raid_level": "raid5f", 00:31:45.277 "superblock": false, 00:31:45.277 "num_base_bdevs": 3, 00:31:45.277 "num_base_bdevs_discovered": 3, 00:31:45.277 "num_base_bdevs_operational": 3, 00:31:45.277 "base_bdevs_list": [ 00:31:45.277 { 00:31:45.277 "name": "spare", 00:31:45.277 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:45.277 "is_configured": true, 00:31:45.277 "data_offset": 0, 00:31:45.277 "data_size": 65536 00:31:45.277 }, 
00:31:45.277 { 00:31:45.277 "name": "BaseBdev2", 00:31:45.277 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:45.277 "is_configured": true, 00:31:45.277 "data_offset": 0, 00:31:45.277 "data_size": 65536 00:31:45.277 }, 00:31:45.277 { 00:31:45.277 "name": "BaseBdev3", 00:31:45.277 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:45.277 "is_configured": true, 00:31:45.277 "data_offset": 0, 00:31:45.277 "data_size": 65536 00:31:45.277 } 00:31:45.277 ] 00:31:45.277 }' 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.277 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.535 23:17:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:45.535 "name": "raid_bdev1", 00:31:45.535 "uuid": "b97f11f6-06dd-4099-888a-ca8eb97bf8af", 00:31:45.535 "strip_size_kb": 64, 00:31:45.535 "state": "online", 00:31:45.535 "raid_level": "raid5f", 00:31:45.535 "superblock": false, 00:31:45.535 "num_base_bdevs": 3, 00:31:45.535 "num_base_bdevs_discovered": 3, 00:31:45.535 "num_base_bdevs_operational": 3, 00:31:45.535 "base_bdevs_list": [ 00:31:45.535 { 00:31:45.535 "name": "spare", 00:31:45.535 "uuid": "047e2ef7-1591-5183-b797-0b2f3047100b", 00:31:45.535 "is_configured": true, 00:31:45.535 "data_offset": 0, 00:31:45.535 "data_size": 65536 00:31:45.535 }, 00:31:45.535 { 00:31:45.535 "name": "BaseBdev2", 00:31:45.535 "uuid": "110eae64-00e0-5aa0-8ae4-335fa26c80af", 00:31:45.535 "is_configured": true, 00:31:45.535 "data_offset": 0, 00:31:45.535 "data_size": 65536 00:31:45.535 }, 00:31:45.535 { 00:31:45.535 "name": "BaseBdev3", 00:31:45.535 "uuid": "3ed94a82-ea24-5a6d-97fd-914ea064c603", 00:31:45.535 "is_configured": true, 00:31:45.535 "data_offset": 0, 00:31:45.535 "data_size": 65536 00:31:45.535 } 00:31:45.535 ] 00:31:45.535 }' 00:31:45.535 23:17:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:45.535 23:17:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.103 23:17:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:46.362 [2024-07-13 23:17:35.730130] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:46.362 [2024-07-13 23:17:35.730570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:46.362 [2024-07-13 23:17:35.730877] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:46.362 [2024-07-13 23:17:35.731199] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:46.362 [2024-07-13 23:17:35.731353] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:31:46.362 23:17:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.362 23:17:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:46.929 /dev/nbd0 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:46.929 23:17:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:46.929 1+0 records in 00:31:46.929 1+0 records out 00:31:46.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372867 s, 11.0 MB/s 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:46.929 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:47.188 /dev/nbd1 00:31:47.447 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:47.447 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:47.447 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:47.447 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.448 1+0 records in 00:31:47.448 1+0 records out 00:31:47.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451215 s, 9.1 MB/s 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 
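At this point both sides of the comparison are exported through the kernel nbd driver — BaseBdev1 on /dev/nbd0 and the rebuilt spare on /dev/nbd1 — and waitfornbd has confirmed each node is readable by pulling a single 4096-byte block through it (the two dd transcripts above). The cmp that follows is the actual correctness check: after a completed raid5f rebuild the spare must be byte-identical to the base bdev it replaced, and -i 0 starts the comparison at byte 0, matching the array's data_offset of 0 in this non-superblock run. The same check can be reproduced by hand against a running bdevperf instance; a sketch assuming the socket path used throughout this test:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Export both bdevs as kernel block devices (requires the nbd module).
    $RPC nbd_start_disk BaseBdev1 /dev/nbd0
    $RPC nbd_start_disk spare /dev/nbd1
    # cmp exits non-zero at the first differing byte, failing the test.
    cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "spare matches BaseBdev1"
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1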
00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:47.448 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:47.707 23:17:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:47.707 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 162287 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 162287 ']' 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 162287 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:31:47.966 23:17:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162287 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162287' 00:31:47.966 killing process with pid 162287 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 162287 00:31:47.966 Received shutdown signal, test time was about 60.000000 seconds 00:31:47.966 00:31:47.966 Latency(us) 00:31:47.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.966 =================================================================================================================== 00:31:47.966 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:47.966 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 162287 00:31:47.966 [2024-07-13 23:17:37.300538] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:47.966 [2024-07-13 23:17:37.346650] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:48.538 23:17:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:31:48.538 00:31:48.538 real 0m21.036s 00:31:48.538 user 0m32.760s 00:31:48.538 sys 0m2.620s 00:31:48.538 ************************************ 00:31:48.538 END TEST raid5f_rebuild_test 00:31:48.538 ************************************ 00:31:48.538 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.538 23:17:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.538 23:17:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:48.538 23:17:37 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:31:48.538 23:17:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:48.538 23:17:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.539 23:17:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:48.539 ************************************ 00:31:48.539 START TEST raid5f_rebuild_test_sb 00:31:48.539 ************************************ 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 true false true 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo 
BaseBdev1 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=162832 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 162832 /var/tmp/spdk-raid.sock 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 162832 ']' 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:48.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
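This second run repeats the rebuild test with superblock=true: create_arg accumulates both '-z 64' (64 KiB strip size) and '-s', so bdev_raid_create stamps an on-disk superblock onto every base bdev. The visible consequence later in this trace is that base bdevs report data_offset 2048 and data_size 63488 blocks instead of 0 and 65536: with 512-byte blocks, the first 1 MiB of each 32 MiB malloc bdev is reserved for RAID metadata. The array is assembled with the RPC issued further down:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

With the superblock persisted on the members, the raid bdev's configuration survives on disk rather than living only in the RPC-driven runtime state, which is what distinguishes this _sb variant from the plain raid5f_rebuild_test above.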
00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:48.539 23:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.539 [2024-07-13 23:17:37.783523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:48.539 [2024-07-13 23:17:37.784041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162832 ] 00:31:48.539 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:48.539 Zero copy mechanism will not be used. 00:31:48.539 [2024-07-13 23:17:37.914885] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.798 [2024-07-13 23:17:38.023003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.798 [2024-07-13 23:17:38.096027] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:49.366 23:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:49.366 23:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:31:49.366 23:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:49.366 23:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:49.623 BaseBdev1_malloc 00:31:49.624 23:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:49.882 [2024-07-13 23:17:39.137607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:49.882 [2024-07-13 23:17:39.138096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:49.882 [2024-07-13 23:17:39.138285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:31:49.882 [2024-07-13 23:17:39.138479] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:49.882 [2024-07-13 23:17:39.141526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:49.882 [2024-07-13 23:17:39.141784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:49.882 BaseBdev1 00:31:49.882 23:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:49.883 23:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:50.141 BaseBdev2_malloc 00:31:50.142 23:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:50.400 [2024-07-13 23:17:39.626344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:50.400 [2024-07-13 23:17:39.626828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:50.400 [2024-07-13 23:17:39.627027] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000006680 00:31:50.400 [2024-07-13 23:17:39.627218] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:50.400 [2024-07-13 23:17:39.630497] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:50.400 [2024-07-13 23:17:39.630703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:50.400 BaseBdev2 00:31:50.400 23:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:50.400 23:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:50.659 BaseBdev3_malloc 00:31:50.659 23:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:50.917 [2024-07-13 23:17:40.108105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:50.917 [2024-07-13 23:17:40.108577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:50.917 [2024-07-13 23:17:40.108811] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:50.917 [2024-07-13 23:17:40.109084] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:50.917 [2024-07-13 23:17:40.112149] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:50.917 [2024-07-13 23:17:40.112351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:50.917 BaseBdev3 00:31:50.917 23:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:51.176 spare_malloc 00:31:51.176 23:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:51.435 spare_delay 00:31:51.435 23:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:51.692 [2024-07-13 23:17:40.876676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:51.692 [2024-07-13 23:17:40.877165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:51.692 [2024-07-13 23:17:40.877380] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:51.692 [2024-07-13 23:17:40.877620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:51.692 [2024-07-13 23:17:40.880470] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:51.693 [2024-07-13 23:17:40.880682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:51.693 spare 00:31:51.693 23:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:31:51.693 [2024-07-13 23:17:41.097207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:51.951 [2024-07-13 
23:17:41.099951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:51.951 [2024-07-13 23:17:41.100206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:51.951 [2024-07-13 23:17:41.100532] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:31:51.951 [2024-07-13 23:17:41.100707] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:51.951 [2024-07-13 23:17:41.101104] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:31:51.951 [2024-07-13 23:17:41.102314] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:31:51.951 [2024-07-13 23:17:41.102470] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:31:51.951 [2024-07-13 23:17:41.102822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.951 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.210 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.210 "name": "raid_bdev1", 00:31:52.210 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:31:52.210 "strip_size_kb": 64, 00:31:52.210 "state": "online", 00:31:52.210 "raid_level": "raid5f", 00:31:52.210 "superblock": true, 00:31:52.210 "num_base_bdevs": 3, 00:31:52.210 "num_base_bdevs_discovered": 3, 00:31:52.210 "num_base_bdevs_operational": 3, 00:31:52.210 "base_bdevs_list": [ 00:31:52.210 { 00:31:52.210 "name": "BaseBdev1", 00:31:52.210 "uuid": "1f4a44fe-7aa4-5e58-a7da-68dbfa9c269b", 00:31:52.210 "is_configured": true, 00:31:52.210 "data_offset": 2048, 00:31:52.210 "data_size": 63488 00:31:52.210 }, 00:31:52.210 { 00:31:52.210 "name": "BaseBdev2", 00:31:52.210 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:31:52.210 "is_configured": true, 00:31:52.210 "data_offset": 2048, 00:31:52.210 "data_size": 63488 00:31:52.210 }, 00:31:52.210 { 00:31:52.210 "name": "BaseBdev3", 00:31:52.210 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:31:52.210 "is_configured": true, 00:31:52.210 
"data_offset": 2048, 00:31:52.210 "data_size": 63488 00:31:52.210 } 00:31:52.210 ] 00:31:52.210 }' 00:31:52.210 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.210 23:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.778 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:52.778 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:53.036 [2024-07-13 23:17:42.271428] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:53.036 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:31:53.036 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.036 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:53.294 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:53.553 [2024-07-13 23:17:42.767464] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:31:53.553 /dev/nbd0 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:53.553 1+0 records in 00:31:53.553 1+0 records out 00:31:53.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657173 s, 6.2 MB/s 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:31:53.553 23:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:31:54.120 496+0 records in 00:31:54.120 496+0 records out 00:31:54.120 65011712 bytes (65 MB, 62 MiB) copied, 0.431095 s, 151 MB/s 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:54.120 [2024-07-13 23:17:43.504693] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- 
# (( i <= 20 )) 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:54.120 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:54.378 [2024-07-13 23:17:43.744487] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.378 23:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.946 23:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.946 "name": "raid_bdev1", 00:31:54.946 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:31:54.946 "strip_size_kb": 64, 00:31:54.946 "state": "online", 00:31:54.946 "raid_level": "raid5f", 00:31:54.946 "superblock": true, 00:31:54.946 "num_base_bdevs": 3, 00:31:54.946 "num_base_bdevs_discovered": 2, 00:31:54.946 "num_base_bdevs_operational": 2, 00:31:54.946 "base_bdevs_list": [ 00:31:54.946 { 00:31:54.946 "name": null, 00:31:54.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.946 "is_configured": false, 00:31:54.946 "data_offset": 2048, 00:31:54.946 "data_size": 63488 00:31:54.946 }, 00:31:54.946 { 00:31:54.946 "name": "BaseBdev2", 00:31:54.946 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:31:54.946 "is_configured": true, 00:31:54.946 "data_offset": 2048, 00:31:54.946 "data_size": 63488 00:31:54.946 }, 00:31:54.946 { 00:31:54.946 "name": "BaseBdev3", 00:31:54.946 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:31:54.946 "is_configured": true, 00:31:54.946 "data_offset": 2048, 00:31:54.946 "data_size": 63488 00:31:54.946 } 00:31:54.946 ] 00:31:54.946 }' 00:31:54.946 23:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.946 23:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:55.512 23:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:55.512 [2024-07-13 23:17:44.832795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.512 [2024-07-13 23:17:44.840160] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 00:31:55.512 [2024-07-13 23:17:44.843428] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:55.512 23:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.887 23:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.887 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.887 "name": "raid_bdev1", 00:31:56.887 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:31:56.887 "strip_size_kb": 64, 00:31:56.887 "state": "online", 00:31:56.887 "raid_level": "raid5f", 00:31:56.887 "superblock": true, 00:31:56.887 "num_base_bdevs": 3, 00:31:56.887 "num_base_bdevs_discovered": 3, 00:31:56.887 "num_base_bdevs_operational": 3, 00:31:56.887 "process": { 00:31:56.887 "type": "rebuild", 00:31:56.887 "target": "spare", 00:31:56.887 "progress": { 00:31:56.887 "blocks": 24576, 00:31:56.887 "percent": 19 00:31:56.887 } 00:31:56.887 }, 00:31:56.887 "base_bdevs_list": [ 00:31:56.887 { 00:31:56.887 "name": "spare", 00:31:56.887 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:31:56.887 "is_configured": true, 00:31:56.887 "data_offset": 2048, 00:31:56.887 "data_size": 63488 00:31:56.887 }, 00:31:56.888 { 00:31:56.888 "name": "BaseBdev2", 00:31:56.888 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:31:56.888 "is_configured": true, 00:31:56.888 "data_offset": 2048, 00:31:56.888 "data_size": 63488 00:31:56.888 }, 00:31:56.888 { 00:31:56.888 "name": "BaseBdev3", 00:31:56.888 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:31:56.888 "is_configured": true, 00:31:56.888 "data_offset": 2048, 00:31:56.888 "data_size": 63488 00:31:56.888 } 00:31:56.888 ] 00:31:56.888 }' 00:31:56.888 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.888 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:56.888 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.888 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:56.888 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:57.146 
[2024-07-13 23:17:46.433696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:57.146 [2024-07-13 23:17:46.462969] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:57.146 [2024-07-13 23:17:46.463288] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.146 [2024-07-13 23:17:46.463444] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:57.146 [2024-07-13 23:17:46.463499] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.146 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.407 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.407 "name": "raid_bdev1", 00:31:57.407 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:31:57.407 "strip_size_kb": 64, 00:31:57.407 "state": "online", 00:31:57.407 "raid_level": "raid5f", 00:31:57.407 "superblock": true, 00:31:57.407 "num_base_bdevs": 3, 00:31:57.407 "num_base_bdevs_discovered": 2, 00:31:57.407 "num_base_bdevs_operational": 2, 00:31:57.407 "base_bdevs_list": [ 00:31:57.407 { 00:31:57.407 "name": null, 00:31:57.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.407 "is_configured": false, 00:31:57.407 "data_offset": 2048, 00:31:57.407 "data_size": 63488 00:31:57.407 }, 00:31:57.407 { 00:31:57.407 "name": "BaseBdev2", 00:31:57.407 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:31:57.407 "is_configured": true, 00:31:57.407 "data_offset": 2048, 00:31:57.407 "data_size": 63488 00:31:57.407 }, 00:31:57.407 { 00:31:57.407 "name": "BaseBdev3", 00:31:57.407 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:31:57.407 "is_configured": true, 00:31:57.407 "data_offset": 2048, 00:31:57.407 "data_size": 63488 00:31:57.407 } 00:31:57.407 ] 00:31:57.407 }' 00:31:57.407 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.407 23:17:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:58.341 "name": "raid_bdev1", 00:31:58.341 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:31:58.341 "strip_size_kb": 64, 00:31:58.341 "state": "online", 00:31:58.341 "raid_level": "raid5f", 00:31:58.341 "superblock": true, 00:31:58.341 "num_base_bdevs": 3, 00:31:58.341 "num_base_bdevs_discovered": 2, 00:31:58.341 "num_base_bdevs_operational": 2, 00:31:58.341 "base_bdevs_list": [ 00:31:58.341 { 00:31:58.341 "name": null, 00:31:58.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.341 "is_configured": false, 00:31:58.341 "data_offset": 2048, 00:31:58.341 "data_size": 63488 00:31:58.341 }, 00:31:58.341 { 00:31:58.341 "name": "BaseBdev2", 00:31:58.341 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:31:58.341 "is_configured": true, 00:31:58.341 "data_offset": 2048, 00:31:58.341 "data_size": 63488 00:31:58.341 }, 00:31:58.341 { 00:31:58.341 "name": "BaseBdev3", 00:31:58.341 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:31:58.341 "is_configured": true, 00:31:58.341 "data_offset": 2048, 00:31:58.341 "data_size": 63488 00:31:58.341 } 00:31:58.341 ] 00:31:58.341 }' 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:58.341 23:17:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:58.908 [2024-07-13 23:17:48.008520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:58.908 [2024-07-13 23:17:48.016126] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:31:58.908 [2024-07-13 23:17:48.019269] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:58.908 23:17:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.842 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:00.100 "name": "raid_bdev1", 00:32:00.100 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:00.100 "strip_size_kb": 64, 00:32:00.100 "state": "online", 00:32:00.100 "raid_level": "raid5f", 00:32:00.100 "superblock": true, 00:32:00.100 "num_base_bdevs": 3, 00:32:00.100 "num_base_bdevs_discovered": 3, 00:32:00.100 "num_base_bdevs_operational": 3, 00:32:00.100 "process": { 00:32:00.100 "type": "rebuild", 00:32:00.100 "target": "spare", 00:32:00.100 "progress": { 00:32:00.100 "blocks": 24576, 00:32:00.100 "percent": 19 00:32:00.100 } 00:32:00.100 }, 00:32:00.100 "base_bdevs_list": [ 00:32:00.100 { 00:32:00.100 "name": "spare", 00:32:00.100 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:00.100 "is_configured": true, 00:32:00.100 "data_offset": 2048, 00:32:00.100 "data_size": 63488 00:32:00.100 }, 00:32:00.100 { 00:32:00.100 "name": "BaseBdev2", 00:32:00.100 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:00.100 "is_configured": true, 00:32:00.100 "data_offset": 2048, 00:32:00.100 "data_size": 63488 00:32:00.100 }, 00:32:00.100 { 00:32:00.100 "name": "BaseBdev3", 00:32:00.100 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:00.100 "is_configured": true, 00:32:00.100 "data_offset": 2048, 00:32:00.100 "data_size": 63488 00:32:00.100 } 00:32:00.100 ] 00:32:00.100 }' 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:00.100 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1107 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 
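The xtrace above captures a genuine shell bug rather than a RAID failure: at bdev_raid.sh line 665 the test expands to '[' = false ']' and fails with "[: =: unary operator expected", because the variable under comparison is empty and unquoted, so [ is left with no operand on the left-hand side. A minimal sketch of the standard fix follows, with the variable name assumed for illustration (it is not visible in this trace):

    # Broken: with an empty, unquoted $fast_rebuild this becomes `[ = false ]`
    if [ $fast_rebuild = false ]; then echo slow; fi

    # Fixed: quote the expansion, or use [[ ]], which does not word-split
    if [ "$fast_rebuild" = false ]; then echo slow; fi
    if [[ $fast_rebuild == false ]]; then echo slow; fi

The failed test simply evaluates false inside its conditional, which is presumably why the trace continues into the @690 branch and the rebuild still proceeds.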
00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.100 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.359 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:00.359 "name": "raid_bdev1", 00:32:00.359 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:00.359 "strip_size_kb": 64, 00:32:00.359 "state": "online", 00:32:00.359 "raid_level": "raid5f", 00:32:00.359 "superblock": true, 00:32:00.359 "num_base_bdevs": 3, 00:32:00.359 "num_base_bdevs_discovered": 3, 00:32:00.359 "num_base_bdevs_operational": 3, 00:32:00.359 "process": { 00:32:00.359 "type": "rebuild", 00:32:00.359 "target": "spare", 00:32:00.359 "progress": { 00:32:00.359 "blocks": 30720, 00:32:00.359 "percent": 24 00:32:00.359 } 00:32:00.359 }, 00:32:00.359 "base_bdevs_list": [ 00:32:00.359 { 00:32:00.359 "name": "spare", 00:32:00.359 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:00.359 "is_configured": true, 00:32:00.359 "data_offset": 2048, 00:32:00.359 "data_size": 63488 00:32:00.359 }, 00:32:00.359 { 00:32:00.359 "name": "BaseBdev2", 00:32:00.359 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:00.359 "is_configured": true, 00:32:00.359 "data_offset": 2048, 00:32:00.359 "data_size": 63488 00:32:00.359 }, 00:32:00.359 { 00:32:00.359 "name": "BaseBdev3", 00:32:00.359 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:00.359 "is_configured": true, 00:32:00.359 "data_offset": 2048, 00:32:00.359 "data_size": 63488 00:32:00.359 } 00:32:00.359 ] 00:32:00.359 }' 00:32:00.359 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:00.359 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:00.359 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:00.359 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:00.359 23:17:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:01.733 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:01.734 "name": "raid_bdev1", 00:32:01.734 "uuid": 
"f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:01.734 "strip_size_kb": 64, 00:32:01.734 "state": "online", 00:32:01.734 "raid_level": "raid5f", 00:32:01.734 "superblock": true, 00:32:01.734 "num_base_bdevs": 3, 00:32:01.734 "num_base_bdevs_discovered": 3, 00:32:01.734 "num_base_bdevs_operational": 3, 00:32:01.734 "process": { 00:32:01.734 "type": "rebuild", 00:32:01.734 "target": "spare", 00:32:01.734 "progress": { 00:32:01.734 "blocks": 59392, 00:32:01.734 "percent": 46 00:32:01.734 } 00:32:01.734 }, 00:32:01.734 "base_bdevs_list": [ 00:32:01.734 { 00:32:01.734 "name": "spare", 00:32:01.734 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:01.734 "is_configured": true, 00:32:01.734 "data_offset": 2048, 00:32:01.734 "data_size": 63488 00:32:01.734 }, 00:32:01.734 { 00:32:01.734 "name": "BaseBdev2", 00:32:01.734 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:01.734 "is_configured": true, 00:32:01.734 "data_offset": 2048, 00:32:01.734 "data_size": 63488 00:32:01.734 }, 00:32:01.734 { 00:32:01.734 "name": "BaseBdev3", 00:32:01.734 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:01.734 "is_configured": true, 00:32:01.734 "data_offset": 2048, 00:32:01.734 "data_size": 63488 00:32:01.734 } 00:32:01.734 ] 00:32:01.734 }' 00:32:01.734 23:17:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:01.734 23:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:01.734 23:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:01.734 23:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.734 23:17:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:03.111 "name": "raid_bdev1", 00:32:03.111 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:03.111 "strip_size_kb": 64, 00:32:03.111 "state": "online", 00:32:03.111 "raid_level": "raid5f", 00:32:03.111 "superblock": true, 00:32:03.111 "num_base_bdevs": 3, 00:32:03.111 "num_base_bdevs_discovered": 3, 00:32:03.111 "num_base_bdevs_operational": 3, 00:32:03.111 "process": { 00:32:03.111 "type": "rebuild", 00:32:03.111 "target": "spare", 00:32:03.111 "progress": { 00:32:03.111 "blocks": 86016, 00:32:03.111 "percent": 67 00:32:03.111 } 00:32:03.111 }, 00:32:03.111 "base_bdevs_list": [ 00:32:03.111 { 00:32:03.111 "name": "spare", 
00:32:03.111 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:03.111 "is_configured": true, 00:32:03.111 "data_offset": 2048, 00:32:03.111 "data_size": 63488 00:32:03.111 }, 00:32:03.111 { 00:32:03.111 "name": "BaseBdev2", 00:32:03.111 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:03.111 "is_configured": true, 00:32:03.111 "data_offset": 2048, 00:32:03.111 "data_size": 63488 00:32:03.111 }, 00:32:03.111 { 00:32:03.111 "name": "BaseBdev3", 00:32:03.111 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:03.111 "is_configured": true, 00:32:03.111 "data_offset": 2048, 00:32:03.111 "data_size": 63488 00:32:03.111 } 00:32:03.111 ] 00:32:03.111 }' 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:03.111 23:17:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:04.050 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:04.050 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:04.050 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:04.051 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:04.051 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:04.051 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:04.051 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.051 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.308 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:04.308 "name": "raid_bdev1", 00:32:04.308 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:04.308 "strip_size_kb": 64, 00:32:04.308 "state": "online", 00:32:04.308 "raid_level": "raid5f", 00:32:04.308 "superblock": true, 00:32:04.308 "num_base_bdevs": 3, 00:32:04.308 "num_base_bdevs_discovered": 3, 00:32:04.308 "num_base_bdevs_operational": 3, 00:32:04.308 "process": { 00:32:04.308 "type": "rebuild", 00:32:04.308 "target": "spare", 00:32:04.308 "progress": { 00:32:04.308 "blocks": 114688, 00:32:04.308 "percent": 90 00:32:04.308 } 00:32:04.308 }, 00:32:04.308 "base_bdevs_list": [ 00:32:04.308 { 00:32:04.308 "name": "spare", 00:32:04.308 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:04.308 "is_configured": true, 00:32:04.308 "data_offset": 2048, 00:32:04.308 "data_size": 63488 00:32:04.308 }, 00:32:04.308 { 00:32:04.308 "name": "BaseBdev2", 00:32:04.308 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:04.308 "is_configured": true, 00:32:04.308 "data_offset": 2048, 00:32:04.308 "data_size": 63488 00:32:04.308 }, 00:32:04.308 { 00:32:04.308 "name": "BaseBdev3", 00:32:04.308 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:04.308 "is_configured": true, 00:32:04.308 "data_offset": 2048, 
00:32:04.308 "data_size": 63488 00:32:04.308 } 00:32:04.308 ] 00:32:04.308 }' 00:32:04.308 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:04.565 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:04.565 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:04.565 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.565 23:17:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:05.171 [2024-07-13 23:17:54.284346] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:05.171 [2024-07-13 23:17:54.284757] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:05.171 [2024-07-13 23:17:54.285183] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.431 23:17:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.690 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:05.690 "name": "raid_bdev1", 00:32:05.690 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:05.690 "strip_size_kb": 64, 00:32:05.690 "state": "online", 00:32:05.690 "raid_level": "raid5f", 00:32:05.690 "superblock": true, 00:32:05.690 "num_base_bdevs": 3, 00:32:05.690 "num_base_bdevs_discovered": 3, 00:32:05.690 "num_base_bdevs_operational": 3, 00:32:05.690 "base_bdevs_list": [ 00:32:05.690 { 00:32:05.690 "name": "spare", 00:32:05.690 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:05.690 "is_configured": true, 00:32:05.690 "data_offset": 2048, 00:32:05.690 "data_size": 63488 00:32:05.690 }, 00:32:05.690 { 00:32:05.690 "name": "BaseBdev2", 00:32:05.690 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:05.690 "is_configured": true, 00:32:05.690 "data_offset": 2048, 00:32:05.690 "data_size": 63488 00:32:05.690 }, 00:32:05.690 { 00:32:05.690 "name": "BaseBdev3", 00:32:05.690 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:05.690 "is_configured": true, 00:32:05.690 "data_offset": 2048, 00:32:05.690 "data_size": 63488 00:32:05.690 } 00:32:05.690 ] 00:32:05.690 }' 00:32:05.690 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:05.948 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:05.948 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:32:05.948 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:05.948 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.949 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:06.206 "name": "raid_bdev1", 00:32:06.206 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:06.206 "strip_size_kb": 64, 00:32:06.206 "state": "online", 00:32:06.206 "raid_level": "raid5f", 00:32:06.206 "superblock": true, 00:32:06.206 "num_base_bdevs": 3, 00:32:06.206 "num_base_bdevs_discovered": 3, 00:32:06.206 "num_base_bdevs_operational": 3, 00:32:06.206 "base_bdevs_list": [ 00:32:06.206 { 00:32:06.206 "name": "spare", 00:32:06.206 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:06.206 "is_configured": true, 00:32:06.206 "data_offset": 2048, 00:32:06.206 "data_size": 63488 00:32:06.206 }, 00:32:06.206 { 00:32:06.206 "name": "BaseBdev2", 00:32:06.206 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:06.206 "is_configured": true, 00:32:06.206 "data_offset": 2048, 00:32:06.206 "data_size": 63488 00:32:06.206 }, 00:32:06.206 { 00:32:06.206 "name": "BaseBdev3", 00:32:06.206 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:06.206 "is_configured": true, 00:32:06.206 "data_offset": 2048, 00:32:06.206 "data_size": 63488 00:32:06.206 } 00:32:06.206 ] 00:32:06.206 }' 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:06.206 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:06.207 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:06.207 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.207 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.465 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:06.465 "name": "raid_bdev1", 00:32:06.465 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:06.465 "strip_size_kb": 64, 00:32:06.465 "state": "online", 00:32:06.465 "raid_level": "raid5f", 00:32:06.465 "superblock": true, 00:32:06.465 "num_base_bdevs": 3, 00:32:06.465 "num_base_bdevs_discovered": 3, 00:32:06.465 "num_base_bdevs_operational": 3, 00:32:06.465 "base_bdevs_list": [ 00:32:06.465 { 00:32:06.465 "name": "spare", 00:32:06.465 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:06.465 "is_configured": true, 00:32:06.465 "data_offset": 2048, 00:32:06.465 "data_size": 63488 00:32:06.465 }, 00:32:06.465 { 00:32:06.465 "name": "BaseBdev2", 00:32:06.465 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:06.465 "is_configured": true, 00:32:06.465 "data_offset": 2048, 00:32:06.465 "data_size": 63488 00:32:06.465 }, 00:32:06.465 { 00:32:06.465 "name": "BaseBdev3", 00:32:06.465 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:06.465 "is_configured": true, 00:32:06.465 "data_offset": 2048, 00:32:06.465 "data_size": 63488 00:32:06.465 } 00:32:06.465 ] 00:32:06.465 }' 00:32:06.465 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:06.465 23:17:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.030 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:07.288 [2024-07-13 23:17:56.654343] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:07.288 [2024-07-13 23:17:56.654727] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:07.288 [2024-07-13 23:17:56.655028] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.288 [2024-07-13 23:17:56.655305] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:07.288 [2024-07-13 23:17:56.655420] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:32:07.288 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.288 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- 
# nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:07.547 23:17:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:07.806 /dev/nbd0 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:07.806 1+0 records in 00:32:07.806 1+0 records out 00:32:07.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310048 s, 13.2 MB/s 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:32:07.806 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:08.065 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:08.065 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:32:08.065 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:08.065 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:08.065 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:08.324 /dev/nbd1 
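The waitfornbd trace above gives each exported device up to 20 polls to appear in /proc/partitions, then proves the kernel-to-SPDK path is live with a single 4 KiB direct-I/O read (the "1+0 records in/out" lines, confirmed via stat on the copied file). A sketch of that probe, with the retry delay assumed since it is not visible in the log:

    waitfornbd() {    # sketch of the helper traced above, not its verbatim body
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the actual delay is not in the trace
        done
        ((i <= 20)) || return 1
        # a single direct read confirms the device actually serves data
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }

With BaseBdev1 and spare mapped to /dev/nbd0 and /dev/nbd1, the test then runs cmp -i 1048576 on the pair: the 1 MiB skip equals the 2048-block data_offset at the 512-byte blocklen reported later in the log, so only the data region behind the superblock is compared, proving the rebuild reproduced BaseBdev1's contents onto spare.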
00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:08.324 1+0 records in 00:32:08.324 1+0 records out 00:32:08.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390061 s, 10.5 MB/s 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:08.324 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:08.583 23:17:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:08.583 23:17:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:08.842 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:09.100 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:09.359 [2024-07-13 23:17:58.655351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:09.359 [2024-07-13 23:17:58.655475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:09.359 [2024-07-13 23:17:58.655526] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:09.359 [2024-07-13 23:17:58.655587] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:09.359 [2024-07-13 23:17:58.658599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:09.359 [2024-07-13 23:17:58.658676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:09.359 [2024-07-13 23:17:58.658816] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:09.359 [2024-07-13 23:17:58.658952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:09.359 [2024-07-13 23:17:58.659150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:09.359 [2024-07-13 23:17:58.659316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:09.359 spare 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.359 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.359 [2024-07-13 23:17:58.759492] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:32:09.359 [2024-07-13 23:17:58.759518] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:09.359 [2024-07-13 23:17:58.759759] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043fc0 00:32:09.359 [2024-07-13 23:17:58.760766] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:32:09.359 [2024-07-13 23:17:58.760798] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:32:09.359 [2024-07-13 23:17:58.761237] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:09.618 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:09.618 "name": "raid_bdev1", 00:32:09.618 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:09.618 "strip_size_kb": 64, 00:32:09.618 "state": "online", 00:32:09.618 "raid_level": "raid5f", 00:32:09.618 "superblock": true, 00:32:09.618 "num_base_bdevs": 3, 00:32:09.618 "num_base_bdevs_discovered": 3, 00:32:09.618 "num_base_bdevs_operational": 3, 00:32:09.618 "base_bdevs_list": [ 00:32:09.618 { 00:32:09.618 "name": "spare", 00:32:09.618 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:09.618 "is_configured": true, 00:32:09.618 "data_offset": 2048, 00:32:09.618 "data_size": 63488 00:32:09.618 }, 00:32:09.618 { 00:32:09.618 "name": "BaseBdev2", 00:32:09.618 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:09.618 "is_configured": true, 00:32:09.618 "data_offset": 2048, 00:32:09.618 "data_size": 63488 00:32:09.618 }, 00:32:09.618 { 00:32:09.618 "name": "BaseBdev3", 00:32:09.618 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:09.618 "is_configured": true, 00:32:09.618 "data_offset": 2048, 00:32:09.618 "data_size": 63488 00:32:09.618 } 00:32:09.618 ] 00:32:09.618 }' 00:32:09.618 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:09.618 23:17:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.185 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:10.185 23:17:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:10.185 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:10.185 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:10.185 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:10.185 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.185 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.444 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:10.444 "name": "raid_bdev1", 00:32:10.444 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:10.444 "strip_size_kb": 64, 00:32:10.444 "state": "online", 00:32:10.444 "raid_level": "raid5f", 00:32:10.444 "superblock": true, 00:32:10.444 "num_base_bdevs": 3, 00:32:10.444 "num_base_bdevs_discovered": 3, 00:32:10.444 "num_base_bdevs_operational": 3, 00:32:10.444 "base_bdevs_list": [ 00:32:10.444 { 00:32:10.444 "name": "spare", 00:32:10.444 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:10.444 "is_configured": true, 00:32:10.444 "data_offset": 2048, 00:32:10.444 "data_size": 63488 00:32:10.444 }, 00:32:10.444 { 00:32:10.444 "name": "BaseBdev2", 00:32:10.444 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:10.444 "is_configured": true, 00:32:10.444 "data_offset": 2048, 00:32:10.444 "data_size": 63488 00:32:10.444 }, 00:32:10.444 { 00:32:10.444 "name": "BaseBdev3", 00:32:10.444 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:10.444 "is_configured": true, 00:32:10.444 "data_offset": 2048, 00:32:10.444 "data_size": 63488 00:32:10.444 } 00:32:10.444 ] 00:32:10.444 }' 00:32:10.444 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:10.444 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:10.444 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:10.703 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:10.703 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.703 23:17:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:10.962 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:10.962 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:10.962 [2024-07-13 23:18:00.363993] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.220 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:11.479 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.479 "name": "raid_bdev1", 00:32:11.479 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:11.479 "strip_size_kb": 64, 00:32:11.479 "state": "online", 00:32:11.479 "raid_level": "raid5f", 00:32:11.479 "superblock": true, 00:32:11.479 "num_base_bdevs": 3, 00:32:11.479 "num_base_bdevs_discovered": 2, 00:32:11.479 "num_base_bdevs_operational": 2, 00:32:11.479 "base_bdevs_list": [ 00:32:11.479 { 00:32:11.479 "name": null, 00:32:11.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.479 "is_configured": false, 00:32:11.479 "data_offset": 2048, 00:32:11.479 "data_size": 63488 00:32:11.479 }, 00:32:11.479 { 00:32:11.479 "name": "BaseBdev2", 00:32:11.479 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:11.479 "is_configured": true, 00:32:11.480 "data_offset": 2048, 00:32:11.480 "data_size": 63488 00:32:11.480 }, 00:32:11.480 { 00:32:11.480 "name": "BaseBdev3", 00:32:11.480 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:11.480 "is_configured": true, 00:32:11.480 "data_offset": 2048, 00:32:11.480 "data_size": 63488 00:32:11.480 } 00:32:11.480 ] 00:32:11.480 }' 00:32:11.480 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.480 23:18:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.046 23:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:12.306 [2024-07-13 23:18:01.552790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:12.306 [2024-07-13 23:18:01.553170] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:12.306 [2024-07-13 23:18:01.553221] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
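The examine notices above are the crux of this step: the re-added spare still carries superblock sequence number 4 while the live array is at 5, so rather than assembling spare as a competing array, the raid layer re-adds it to the existing raid_bdev1 and starts another rebuild. The same path can be driven by hand with the two RPCs this run already uses (a usage sketch; socket path as in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1").process'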
00:32:12.306 [2024-07-13 23:18:01.553333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:12.306 [2024-07-13 23:18:01.560235] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044160 00:32:12.306 [2024-07-13 23:18:01.563208] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:12.306 23:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.242 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.501 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:13.501 "name": "raid_bdev1", 00:32:13.501 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:13.501 "strip_size_kb": 64, 00:32:13.501 "state": "online", 00:32:13.501 "raid_level": "raid5f", 00:32:13.501 "superblock": true, 00:32:13.501 "num_base_bdevs": 3, 00:32:13.501 "num_base_bdevs_discovered": 3, 00:32:13.501 "num_base_bdevs_operational": 3, 00:32:13.501 "process": { 00:32:13.501 "type": "rebuild", 00:32:13.501 "target": "spare", 00:32:13.501 "progress": { 00:32:13.501 "blocks": 24576, 00:32:13.501 "percent": 19 00:32:13.501 } 00:32:13.501 }, 00:32:13.501 "base_bdevs_list": [ 00:32:13.501 { 00:32:13.501 "name": "spare", 00:32:13.501 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:13.501 "is_configured": true, 00:32:13.501 "data_offset": 2048, 00:32:13.501 "data_size": 63488 00:32:13.501 }, 00:32:13.501 { 00:32:13.501 "name": "BaseBdev2", 00:32:13.501 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:13.501 "is_configured": true, 00:32:13.501 "data_offset": 2048, 00:32:13.501 "data_size": 63488 00:32:13.501 }, 00:32:13.501 { 00:32:13.501 "name": "BaseBdev3", 00:32:13.501 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:13.501 "is_configured": true, 00:32:13.501 "data_offset": 2048, 00:32:13.501 "data_size": 63488 00:32:13.501 } 00:32:13.501 ] 00:32:13.501 }' 00:32:13.501 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:13.501 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:13.760 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:13.760 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:13.760 23:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:14.022 [2024-07-13 23:18:03.197153] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:14.022 [2024-07-13 
23:18:03.282135] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:14.022 [2024-07-13 23:18:03.282247] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:14.022 [2024-07-13 23:18:03.282273] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:14.022 [2024-07-13 23:18:03.282283] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.022 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.282 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.282 "name": "raid_bdev1", 00:32:14.282 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:14.282 "strip_size_kb": 64, 00:32:14.282 "state": "online", 00:32:14.282 "raid_level": "raid5f", 00:32:14.282 "superblock": true, 00:32:14.282 "num_base_bdevs": 3, 00:32:14.282 "num_base_bdevs_discovered": 2, 00:32:14.282 "num_base_bdevs_operational": 2, 00:32:14.282 "base_bdevs_list": [ 00:32:14.282 { 00:32:14.282 "name": null, 00:32:14.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.282 "is_configured": false, 00:32:14.282 "data_offset": 2048, 00:32:14.282 "data_size": 63488 00:32:14.282 }, 00:32:14.282 { 00:32:14.282 "name": "BaseBdev2", 00:32:14.282 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:14.282 "is_configured": true, 00:32:14.282 "data_offset": 2048, 00:32:14.282 "data_size": 63488 00:32:14.282 }, 00:32:14.282 { 00:32:14.282 "name": "BaseBdev3", 00:32:14.282 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:14.282 "is_configured": true, 00:32:14.282 "data_offset": 2048, 00:32:14.282 "data_size": 63488 00:32:14.282 } 00:32:14.282 ] 00:32:14.282 }' 00:32:14.282 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.282 23:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.848 23:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:15.107 
[2024-07-13 23:18:04.426800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:15.107 [2024-07-13 23:18:04.427015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:15.107 [2024-07-13 23:18:04.427071] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:15.107 [2024-07-13 23:18:04.427107] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:15.107 [2024-07-13 23:18:04.427829] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:15.107 [2024-07-13 23:18:04.427893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:15.107 [2024-07-13 23:18:04.428082] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:15.107 [2024-07-13 23:18:04.428103] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:15.107 [2024-07-13 23:18:04.428117] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:15.107 [2024-07-13 23:18:04.428190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:15.107 [2024-07-13 23:18:04.435101] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000444a0 00:32:15.107 spare 00:32:15.107 [2024-07-13 23:18:04.438106] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:15.107 23:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:16.484 "name": "raid_bdev1", 00:32:16.484 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:16.484 "strip_size_kb": 64, 00:32:16.484 "state": "online", 00:32:16.484 "raid_level": "raid5f", 00:32:16.484 "superblock": true, 00:32:16.484 "num_base_bdevs": 3, 00:32:16.484 "num_base_bdevs_discovered": 3, 00:32:16.484 "num_base_bdevs_operational": 3, 00:32:16.484 "process": { 00:32:16.484 "type": "rebuild", 00:32:16.484 "target": "spare", 00:32:16.484 "progress": { 00:32:16.484 "blocks": 24576, 00:32:16.484 "percent": 19 00:32:16.484 } 00:32:16.484 }, 00:32:16.484 "base_bdevs_list": [ 00:32:16.484 { 00:32:16.484 "name": "spare", 00:32:16.484 "uuid": "a53b3e8d-44d6-5b7a-b37f-c911bdfa1400", 00:32:16.484 "is_configured": true, 00:32:16.484 "data_offset": 2048, 00:32:16.484 "data_size": 63488 00:32:16.484 }, 00:32:16.484 { 00:32:16.484 "name": "BaseBdev2", 00:32:16.484 "uuid": 
"8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:16.484 "is_configured": true, 00:32:16.484 "data_offset": 2048, 00:32:16.484 "data_size": 63488 00:32:16.484 }, 00:32:16.484 { 00:32:16.484 "name": "BaseBdev3", 00:32:16.484 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:16.484 "is_configured": true, 00:32:16.484 "data_offset": 2048, 00:32:16.484 "data_size": 63488 00:32:16.484 } 00:32:16.484 ] 00:32:16.484 }' 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:16.484 23:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:16.743 [2024-07-13 23:18:06.020074] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:16.743 [2024-07-13 23:18:06.055300] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:16.743 [2024-07-13 23:18:06.055410] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.743 [2024-07-13 23:18:06.055433] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:16.743 [2024-07-13 23:18:06.055442] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.743 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.002 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.002 "name": "raid_bdev1", 00:32:17.002 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:17.002 "strip_size_kb": 64, 00:32:17.002 "state": "online", 00:32:17.002 "raid_level": "raid5f", 00:32:17.002 "superblock": true, 00:32:17.002 "num_base_bdevs": 3, 00:32:17.002 "num_base_bdevs_discovered": 2, 00:32:17.002 
"num_base_bdevs_operational": 2, 00:32:17.002 "base_bdevs_list": [ 00:32:17.002 { 00:32:17.002 "name": null, 00:32:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.002 "is_configured": false, 00:32:17.002 "data_offset": 2048, 00:32:17.002 "data_size": 63488 00:32:17.002 }, 00:32:17.002 { 00:32:17.002 "name": "BaseBdev2", 00:32:17.002 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:17.002 "is_configured": true, 00:32:17.002 "data_offset": 2048, 00:32:17.002 "data_size": 63488 00:32:17.002 }, 00:32:17.002 { 00:32:17.002 "name": "BaseBdev3", 00:32:17.002 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:17.002 "is_configured": true, 00:32:17.002 "data_offset": 2048, 00:32:17.002 "data_size": 63488 00:32:17.002 } 00:32:17.002 ] 00:32:17.002 }' 00:32:17.003 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.003 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:17.572 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:17.572 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:17.572 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:17.572 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:17.572 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:17.853 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.853 23:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.853 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:17.853 "name": "raid_bdev1", 00:32:17.853 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:17.853 "strip_size_kb": 64, 00:32:17.853 "state": "online", 00:32:17.853 "raid_level": "raid5f", 00:32:17.853 "superblock": true, 00:32:17.853 "num_base_bdevs": 3, 00:32:17.853 "num_base_bdevs_discovered": 2, 00:32:17.853 "num_base_bdevs_operational": 2, 00:32:17.853 "base_bdevs_list": [ 00:32:17.853 { 00:32:17.853 "name": null, 00:32:17.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.853 "is_configured": false, 00:32:17.853 "data_offset": 2048, 00:32:17.853 "data_size": 63488 00:32:17.853 }, 00:32:17.853 { 00:32:17.853 "name": "BaseBdev2", 00:32:17.853 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:17.853 "is_configured": true, 00:32:17.853 "data_offset": 2048, 00:32:17.853 "data_size": 63488 00:32:17.853 }, 00:32:17.853 { 00:32:17.853 "name": "BaseBdev3", 00:32:17.853 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:17.853 "is_configured": true, 00:32:17.853 "data_offset": 2048, 00:32:17.853 "data_size": 63488 00:32:17.853 } 00:32:17.853 ] 00:32:17.853 }' 00:32:17.853 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:18.112 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:18.112 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:18.112 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:18.112 23:18:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:18.370 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:18.628 [2024-07-13 23:18:07.804379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:18.628 [2024-07-13 23:18:07.804554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:18.628 [2024-07-13 23:18:07.804675] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:18.628 [2024-07-13 23:18:07.804707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:18.628 [2024-07-13 23:18:07.805376] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:18.628 [2024-07-13 23:18:07.805433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:18.628 [2024-07-13 23:18:07.805612] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:18.628 [2024-07-13 23:18:07.805637] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:18.628 [2024-07-13 23:18:07.805647] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:18.628 BaseBdev1 00:32:18.628 23:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:19.563 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.564 23:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.822 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.822 "name": "raid_bdev1", 00:32:19.822 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:19.822 "strip_size_kb": 64, 00:32:19.822 "state": "online", 00:32:19.822 "raid_level": "raid5f", 00:32:19.822 "superblock": true, 00:32:19.822 "num_base_bdevs": 3, 00:32:19.822 "num_base_bdevs_discovered": 2, 00:32:19.822 
"num_base_bdevs_operational": 2, 00:32:19.822 "base_bdevs_list": [ 00:32:19.822 { 00:32:19.822 "name": null, 00:32:19.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.822 "is_configured": false, 00:32:19.822 "data_offset": 2048, 00:32:19.822 "data_size": 63488 00:32:19.822 }, 00:32:19.822 { 00:32:19.822 "name": "BaseBdev2", 00:32:19.822 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:19.822 "is_configured": true, 00:32:19.822 "data_offset": 2048, 00:32:19.822 "data_size": 63488 00:32:19.822 }, 00:32:19.822 { 00:32:19.822 "name": "BaseBdev3", 00:32:19.822 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:19.822 "is_configured": true, 00:32:19.822 "data_offset": 2048, 00:32:19.822 "data_size": 63488 00:32:19.822 } 00:32:19.822 ] 00:32:19.822 }' 00:32:19.822 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.822 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.389 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.648 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:20.648 "name": "raid_bdev1", 00:32:20.648 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:20.648 "strip_size_kb": 64, 00:32:20.648 "state": "online", 00:32:20.648 "raid_level": "raid5f", 00:32:20.648 "superblock": true, 00:32:20.648 "num_base_bdevs": 3, 00:32:20.648 "num_base_bdevs_discovered": 2, 00:32:20.648 "num_base_bdevs_operational": 2, 00:32:20.648 "base_bdevs_list": [ 00:32:20.648 { 00:32:20.648 "name": null, 00:32:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.648 "is_configured": false, 00:32:20.648 "data_offset": 2048, 00:32:20.648 "data_size": 63488 00:32:20.648 }, 00:32:20.648 { 00:32:20.648 "name": "BaseBdev2", 00:32:20.648 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:20.648 "is_configured": true, 00:32:20.648 "data_offset": 2048, 00:32:20.648 "data_size": 63488 00:32:20.648 }, 00:32:20.648 { 00:32:20.648 "name": "BaseBdev3", 00:32:20.648 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:20.648 "is_configured": true, 00:32:20.648 "data_offset": 2048, 00:32:20.648 "data_size": 63488 00:32:20.648 } 00:32:20.648 ] 00:32:20.648 }' 00:32:20.648 23:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:20.648 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:20.648 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:20.907 23:18:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:20.907 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:20.907 [2024-07-13 23:18:10.300376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:20.907 [2024-07-13 23:18:10.300656] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:20.907 [2024-07-13 23:18:10.300675] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:20.907 request: 00:32:20.907 { 00:32:20.907 "base_bdev": "BaseBdev1", 00:32:20.907 "raid_bdev": "raid_bdev1", 00:32:20.907 "method": "bdev_raid_add_base_bdev", 00:32:20.907 "req_id": 1 00:32:20.907 } 00:32:20.907 Got JSON-RPC error response 00:32:20.907 response: 00:32:20.907 { 00:32:20.907 "code": -22, 00:32:20.907 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:20.907 } 00:32:21.165 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:32:21.165 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:21.165 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:21.165 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:21.165 23:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
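The NOT-wrapped call above exercises the failure path: re-adding a base bdev whose superblock seq_number (1) is older than the raid bdev's (5) must be rejected with JSON-RPC error -22. A hedged sketch of the same assertion, using only the RPC shown in this log:

    #!/usr/bin/env bash
    # Sketch: the add must fail; treat success as a test error.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    if "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo "unexpected success: stale superblock should be rejected" >&2
        exit 1
    fi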
00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.101 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.359 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:22.359 "name": "raid_bdev1", 00:32:22.359 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:22.359 "strip_size_kb": 64, 00:32:22.359 "state": "online", 00:32:22.359 "raid_level": "raid5f", 00:32:22.359 "superblock": true, 00:32:22.359 "num_base_bdevs": 3, 00:32:22.359 "num_base_bdevs_discovered": 2, 00:32:22.359 "num_base_bdevs_operational": 2, 00:32:22.359 "base_bdevs_list": [ 00:32:22.359 { 00:32:22.359 "name": null, 00:32:22.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.359 "is_configured": false, 00:32:22.359 "data_offset": 2048, 00:32:22.359 "data_size": 63488 00:32:22.359 }, 00:32:22.359 { 00:32:22.359 "name": "BaseBdev2", 00:32:22.359 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:22.359 "is_configured": true, 00:32:22.359 "data_offset": 2048, 00:32:22.359 "data_size": 63488 00:32:22.359 }, 00:32:22.359 { 00:32:22.359 "name": "BaseBdev3", 00:32:22.359 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:22.359 "is_configured": true, 00:32:22.359 "data_offset": 2048, 00:32:22.359 "data_size": 63488 00:32:22.359 } 00:32:22.359 ] 00:32:22.359 }' 00:32:22.359 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:22.359 23:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.925 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.182 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.182 "name": "raid_bdev1", 00:32:23.182 "uuid": "f5fae5fd-fc12-4e10-ae83-3252d1616c51", 00:32:23.182 
"strip_size_kb": 64, 00:32:23.182 "state": "online", 00:32:23.182 "raid_level": "raid5f", 00:32:23.182 "superblock": true, 00:32:23.182 "num_base_bdevs": 3, 00:32:23.182 "num_base_bdevs_discovered": 2, 00:32:23.182 "num_base_bdevs_operational": 2, 00:32:23.182 "base_bdevs_list": [ 00:32:23.182 { 00:32:23.182 "name": null, 00:32:23.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.182 "is_configured": false, 00:32:23.182 "data_offset": 2048, 00:32:23.182 "data_size": 63488 00:32:23.182 }, 00:32:23.182 { 00:32:23.182 "name": "BaseBdev2", 00:32:23.182 "uuid": "8d883b94-1d99-58a9-88df-eeae8f65d339", 00:32:23.182 "is_configured": true, 00:32:23.182 "data_offset": 2048, 00:32:23.182 "data_size": 63488 00:32:23.182 }, 00:32:23.182 { 00:32:23.182 "name": "BaseBdev3", 00:32:23.182 "uuid": "13c10d35-9b61-5285-9767-ae606e7f42d4", 00:32:23.182 "is_configured": true, 00:32:23.182 "data_offset": 2048, 00:32:23.182 "data_size": 63488 00:32:23.182 } 00:32:23.182 ] 00:32:23.182 }' 00:32:23.182 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.182 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 162832 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 162832 ']' 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 162832 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162832 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162832' 00:32:23.439 killing process with pid 162832 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 162832 00:32:23.439 Received shutdown signal, test time was about 60.000000 seconds 00:32:23.439 00:32:23.439 Latency(us) 00:32:23.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.439 =================================================================================================================== 00:32:23.439 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:23.439 23:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 162832 00:32:23.439 [2024-07-13 23:18:12.667217] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:23.439 [2024-07-13 23:18:12.667463] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:23.439 [2024-07-13 23:18:12.667576] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:23.439 [2024-07-13 23:18:12.667606] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:32:23.439 [2024-07-13 23:18:12.724053] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:23.696 23:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:32:23.696 00:32:23.696 real 0m35.350s 00:32:23.696 user 0m56.432s 00:32:23.696 ************************************ 00:32:23.696 END TEST raid5f_rebuild_test_sb 00:32:23.696 ************************************ 00:32:23.696 sys 0m4.044s 00:32:23.696 23:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.696 23:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.962 23:18:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:23.962 23:18:13 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:32:23.962 23:18:13 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:32:23.962 23:18:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:23.962 23:18:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.962 23:18:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:23.962 ************************************ 00:32:23.962 START TEST raid5f_state_function_test 00:32:23.962 ************************************ 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 false 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:23.962 23:18:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=163771 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 163771' 00:32:23.962 Process raid pid: 163771 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 163771 /var/tmp/spdk-raid.sock 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 163771 ']' 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:23.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.962 23:18:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.962 [2024-07-13 23:18:13.197252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
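The state-function test starts a bare bdev_svc app on a dedicated RPC socket before issuing any raid RPCs. A rough sketch of that startup handshake, assuming the binary path and flags from this log; the polling loop is a stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch: launch the SPDK bdev service with raid debug logging (-L bdev_raid)
    # and wait until its RPC socket answers.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done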
00:32:23.962 [2024-07-13 23:18:13.197576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.962 [2024-07-13 23:18:13.341701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.222 [2024-07-13 23:18:13.441905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.222 [2024-07-13 23:18:13.523126] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:24.787 23:18:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:24.787 23:18:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:32:24.787 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:25.045 [2024-07-13 23:18:14.411187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.045 [2024-07-13 23:18:14.411314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.045 [2024-07-13 23:18:14.411331] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.045 [2024-07-13 23:18:14.411353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.045 [2024-07-13 23:18:14.411362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:25.045 [2024-07-13 23:18:14.411410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:25.045 [2024-07-13 23:18:14.411421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:25.045 [2024-07-13 23:18:14.411447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:25.045 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.045 23:18:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.612 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:25.612 "name": "Existed_Raid", 00:32:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.612 "strip_size_kb": 64, 00:32:25.612 "state": "configuring", 00:32:25.612 "raid_level": "raid5f", 00:32:25.612 "superblock": false, 00:32:25.612 "num_base_bdevs": 4, 00:32:25.612 "num_base_bdevs_discovered": 0, 00:32:25.612 "num_base_bdevs_operational": 4, 00:32:25.612 "base_bdevs_list": [ 00:32:25.612 { 00:32:25.612 "name": "BaseBdev1", 00:32:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.612 "is_configured": false, 00:32:25.612 "data_offset": 0, 00:32:25.612 "data_size": 0 00:32:25.612 }, 00:32:25.612 { 00:32:25.612 "name": "BaseBdev2", 00:32:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.612 "is_configured": false, 00:32:25.612 "data_offset": 0, 00:32:25.612 "data_size": 0 00:32:25.612 }, 00:32:25.612 { 00:32:25.612 "name": "BaseBdev3", 00:32:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.612 "is_configured": false, 00:32:25.612 "data_offset": 0, 00:32:25.612 "data_size": 0 00:32:25.612 }, 00:32:25.612 { 00:32:25.612 "name": "BaseBdev4", 00:32:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.612 "is_configured": false, 00:32:25.612 "data_offset": 0, 00:32:25.612 "data_size": 0 00:32:25.612 } 00:32:25.612 ] 00:32:25.612 }' 00:32:25.612 23:18:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:25.612 23:18:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.178 23:18:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:26.178 [2024-07-13 23:18:15.571384] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:26.178 [2024-07-13 23:18:15.571458] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:32:26.437 23:18:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:26.437 [2024-07-13 23:18:15.823378] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:26.437 [2024-07-13 23:18:15.823469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:26.437 [2024-07-13 23:18:15.823491] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:26.437 [2024-07-13 23:18:15.823555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:26.437 [2024-07-13 23:18:15.823567] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:26.437 [2024-07-13 23:18:15.823588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:26.437 [2024-07-13 23:18:15.823598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:26.437 [2024-07-13 23:18:15.823631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:26.437 23:18:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:26.696 [2024-07-13 23:18:16.039581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:26.696 BaseBdev1 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:26.696 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:26.954 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:27.213 [ 00:32:27.213 { 00:32:27.213 "name": "BaseBdev1", 00:32:27.213 "aliases": [ 00:32:27.213 "d9a1da74-91ee-49d2-9bc0-99cc49544193" 00:32:27.213 ], 00:32:27.213 "product_name": "Malloc disk", 00:32:27.213 "block_size": 512, 00:32:27.213 "num_blocks": 65536, 00:32:27.213 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:27.213 "assigned_rate_limits": { 00:32:27.213 "rw_ios_per_sec": 0, 00:32:27.213 "rw_mbytes_per_sec": 0, 00:32:27.213 "r_mbytes_per_sec": 0, 00:32:27.213 "w_mbytes_per_sec": 0 00:32:27.213 }, 00:32:27.213 "claimed": true, 00:32:27.213 "claim_type": "exclusive_write", 00:32:27.213 "zoned": false, 00:32:27.213 "supported_io_types": { 00:32:27.213 "read": true, 00:32:27.213 "write": true, 00:32:27.213 "unmap": true, 00:32:27.213 "flush": true, 00:32:27.213 "reset": true, 00:32:27.213 "nvme_admin": false, 00:32:27.213 "nvme_io": false, 00:32:27.213 "nvme_io_md": false, 00:32:27.213 "write_zeroes": true, 00:32:27.213 "zcopy": true, 00:32:27.213 "get_zone_info": false, 00:32:27.213 "zone_management": false, 00:32:27.213 "zone_append": false, 00:32:27.213 "compare": false, 00:32:27.213 "compare_and_write": false, 00:32:27.213 "abort": true, 00:32:27.213 "seek_hole": false, 00:32:27.213 "seek_data": false, 00:32:27.213 "copy": true, 00:32:27.213 "nvme_iov_md": false 00:32:27.213 }, 00:32:27.213 "memory_domains": [ 00:32:27.213 { 00:32:27.213 "dma_device_id": "system", 00:32:27.213 "dma_device_type": 1 00:32:27.213 }, 00:32:27.213 { 00:32:27.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.213 "dma_device_type": 2 00:32:27.213 } 00:32:27.213 ], 00:32:27.213 "driver_specific": {} 00:32:27.213 } 00:32:27.213 ] 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.213 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.472 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:27.472 "name": "Existed_Raid", 00:32:27.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.472 "strip_size_kb": 64, 00:32:27.472 "state": "configuring", 00:32:27.472 "raid_level": "raid5f", 00:32:27.472 "superblock": false, 00:32:27.472 "num_base_bdevs": 4, 00:32:27.472 "num_base_bdevs_discovered": 1, 00:32:27.472 "num_base_bdevs_operational": 4, 00:32:27.472 "base_bdevs_list": [ 00:32:27.472 { 00:32:27.472 "name": "BaseBdev1", 00:32:27.472 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:27.472 "is_configured": true, 00:32:27.472 "data_offset": 0, 00:32:27.472 "data_size": 65536 00:32:27.472 }, 00:32:27.472 { 00:32:27.472 "name": "BaseBdev2", 00:32:27.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.472 "is_configured": false, 00:32:27.472 "data_offset": 0, 00:32:27.472 "data_size": 0 00:32:27.472 }, 00:32:27.472 { 00:32:27.472 "name": "BaseBdev3", 00:32:27.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.472 "is_configured": false, 00:32:27.472 "data_offset": 0, 00:32:27.473 "data_size": 0 00:32:27.473 }, 00:32:27.473 { 00:32:27.473 "name": "BaseBdev4", 00:32:27.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.473 "is_configured": false, 00:32:27.473 "data_offset": 0, 00:32:27.473 "data_size": 0 00:32:27.473 } 00:32:27.473 ] 00:32:27.473 }' 00:32:27.473 23:18:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:27.473 23:18:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.040 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:28.040 [2024-07-13 23:18:17.444104] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:28.040 [2024-07-13 23:18:17.444217] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:32:28.298 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:28.298 [2024-07-13 23:18:17.664215] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:28.298 [2024-07-13 23:18:17.666812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:28.298 [2024-07-13 23:18:17.666958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:28.298 [2024-07-13 23:18:17.666974] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:28.298 [2024-07-13 23:18:17.667009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:28.298 [2024-07-13 23:18:17.667021] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:28.298 [2024-07-13 23:18:17.667041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:28.298 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:28.298 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:28.298 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:28.298 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.299 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.557 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:28.557 "name": "Existed_Raid", 00:32:28.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.557 "strip_size_kb": 64, 00:32:28.557 "state": "configuring", 00:32:28.557 "raid_level": "raid5f", 00:32:28.557 "superblock": false, 00:32:28.557 "num_base_bdevs": 4, 00:32:28.557 "num_base_bdevs_discovered": 1, 00:32:28.557 "num_base_bdevs_operational": 4, 00:32:28.557 "base_bdevs_list": [ 00:32:28.558 { 00:32:28.558 "name": "BaseBdev1", 00:32:28.558 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:28.558 "is_configured": true, 00:32:28.558 "data_offset": 0, 00:32:28.558 "data_size": 65536 00:32:28.558 }, 00:32:28.558 { 00:32:28.558 "name": "BaseBdev2", 00:32:28.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.558 "is_configured": false, 00:32:28.558 "data_offset": 0, 00:32:28.558 "data_size": 0 00:32:28.558 }, 00:32:28.558 { 
00:32:28.558 "name": "BaseBdev3", 00:32:28.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.558 "is_configured": false, 00:32:28.558 "data_offset": 0, 00:32:28.558 "data_size": 0 00:32:28.558 }, 00:32:28.558 { 00:32:28.558 "name": "BaseBdev4", 00:32:28.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.558 "is_configured": false, 00:32:28.558 "data_offset": 0, 00:32:28.558 "data_size": 0 00:32:28.558 } 00:32:28.558 ] 00:32:28.558 }' 00:32:28.558 23:18:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:28.558 23:18:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.126 23:18:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:29.385 [2024-07-13 23:18:18.766301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:29.385 BaseBdev2 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:29.385 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:29.643 23:18:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:29.902 [ 00:32:29.902 { 00:32:29.902 "name": "BaseBdev2", 00:32:29.902 "aliases": [ 00:32:29.902 "50ed07e3-f921-4f5e-887e-a45b372a9b25" 00:32:29.902 ], 00:32:29.902 "product_name": "Malloc disk", 00:32:29.902 "block_size": 512, 00:32:29.902 "num_blocks": 65536, 00:32:29.902 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:29.902 "assigned_rate_limits": { 00:32:29.902 "rw_ios_per_sec": 0, 00:32:29.902 "rw_mbytes_per_sec": 0, 00:32:29.902 "r_mbytes_per_sec": 0, 00:32:29.902 "w_mbytes_per_sec": 0 00:32:29.902 }, 00:32:29.902 "claimed": true, 00:32:29.902 "claim_type": "exclusive_write", 00:32:29.902 "zoned": false, 00:32:29.902 "supported_io_types": { 00:32:29.902 "read": true, 00:32:29.902 "write": true, 00:32:29.902 "unmap": true, 00:32:29.902 "flush": true, 00:32:29.902 "reset": true, 00:32:29.902 "nvme_admin": false, 00:32:29.902 "nvme_io": false, 00:32:29.902 "nvme_io_md": false, 00:32:29.902 "write_zeroes": true, 00:32:29.902 "zcopy": true, 00:32:29.902 "get_zone_info": false, 00:32:29.902 "zone_management": false, 00:32:29.902 "zone_append": false, 00:32:29.902 "compare": false, 00:32:29.902 "compare_and_write": false, 00:32:29.902 "abort": true, 00:32:29.902 "seek_hole": false, 00:32:29.902 "seek_data": false, 00:32:29.902 "copy": true, 00:32:29.902 "nvme_iov_md": false 00:32:29.902 }, 00:32:29.902 "memory_domains": [ 00:32:29.902 { 00:32:29.902 "dma_device_id": "system", 00:32:29.902 "dma_device_type": 1 00:32:29.902 }, 
00:32:29.902 { 00:32:29.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.902 "dma_device_type": 2 00:32:29.902 } 00:32:29.902 ], 00:32:29.902 "driver_specific": {} 00:32:29.902 } 00:32:29.902 ] 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.902 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.161 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:30.161 "name": "Existed_Raid", 00:32:30.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.161 "strip_size_kb": 64, 00:32:30.161 "state": "configuring", 00:32:30.161 "raid_level": "raid5f", 00:32:30.161 "superblock": false, 00:32:30.161 "num_base_bdevs": 4, 00:32:30.161 "num_base_bdevs_discovered": 2, 00:32:30.161 "num_base_bdevs_operational": 4, 00:32:30.161 "base_bdevs_list": [ 00:32:30.161 { 00:32:30.161 "name": "BaseBdev1", 00:32:30.161 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:30.161 "is_configured": true, 00:32:30.161 "data_offset": 0, 00:32:30.161 "data_size": 65536 00:32:30.161 }, 00:32:30.161 { 00:32:30.161 "name": "BaseBdev2", 00:32:30.161 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:30.161 "is_configured": true, 00:32:30.161 "data_offset": 0, 00:32:30.161 "data_size": 65536 00:32:30.161 }, 00:32:30.161 { 00:32:30.161 "name": "BaseBdev3", 00:32:30.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.161 "is_configured": false, 00:32:30.161 "data_offset": 0, 00:32:30.161 "data_size": 0 00:32:30.161 }, 00:32:30.161 { 00:32:30.161 "name": "BaseBdev4", 00:32:30.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.161 "is_configured": false, 00:32:30.161 "data_offset": 0, 00:32:30.161 "data_size": 0 00:32:30.161 } 00:32:30.161 ] 00:32:30.161 }' 00:32:30.161 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
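Each base bdev is added the same way: create a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON descriptors above), then re-read the raid info and expect state "configuring" with one more base discovered. A condensed sketch of that loop, simplified from the harness (which also deletes and recreates Existed_Raid between steps):

    #!/usr/bin/env bash
    # Sketch: grow the discovered count one base bdev at a time.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for n in 1 2 3 4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$n"
        "$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
    done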
00:32:30.161 23:18:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.737 23:18:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:31.011 [2024-07-13 23:18:20.188985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:31.011 BaseBdev3 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:31.011 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:31.270 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:31.270 [ 00:32:31.270 { 00:32:31.270 "name": "BaseBdev3", 00:32:31.270 "aliases": [ 00:32:31.270 "97bd92d8-61d2-4952-b0c1-06ea947ac8b9" 00:32:31.270 ], 00:32:31.270 "product_name": "Malloc disk", 00:32:31.270 "block_size": 512, 00:32:31.270 "num_blocks": 65536, 00:32:31.270 "uuid": "97bd92d8-61d2-4952-b0c1-06ea947ac8b9", 00:32:31.270 "assigned_rate_limits": { 00:32:31.270 "rw_ios_per_sec": 0, 00:32:31.270 "rw_mbytes_per_sec": 0, 00:32:31.270 "r_mbytes_per_sec": 0, 00:32:31.270 "w_mbytes_per_sec": 0 00:32:31.270 }, 00:32:31.270 "claimed": true, 00:32:31.270 "claim_type": "exclusive_write", 00:32:31.270 "zoned": false, 00:32:31.270 "supported_io_types": { 00:32:31.270 "read": true, 00:32:31.270 "write": true, 00:32:31.270 "unmap": true, 00:32:31.270 "flush": true, 00:32:31.270 "reset": true, 00:32:31.270 "nvme_admin": false, 00:32:31.270 "nvme_io": false, 00:32:31.270 "nvme_io_md": false, 00:32:31.270 "write_zeroes": true, 00:32:31.270 "zcopy": true, 00:32:31.270 "get_zone_info": false, 00:32:31.270 "zone_management": false, 00:32:31.270 "zone_append": false, 00:32:31.270 "compare": false, 00:32:31.270 "compare_and_write": false, 00:32:31.270 "abort": true, 00:32:31.270 "seek_hole": false, 00:32:31.270 "seek_data": false, 00:32:31.270 "copy": true, 00:32:31.270 "nvme_iov_md": false 00:32:31.270 }, 00:32:31.270 "memory_domains": [ 00:32:31.270 { 00:32:31.270 "dma_device_id": "system", 00:32:31.270 "dma_device_type": 1 00:32:31.270 }, 00:32:31.270 { 00:32:31.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.270 "dma_device_type": 2 00:32:31.270 } 00:32:31.270 ], 00:32:31.270 "driver_specific": {} 00:32:31.270 } 00:32:31.270 ] 00:32:31.528 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:31.528 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:31.528 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:31.529 23:18:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:31.529 "name": "Existed_Raid", 00:32:31.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.529 "strip_size_kb": 64, 00:32:31.529 "state": "configuring", 00:32:31.529 "raid_level": "raid5f", 00:32:31.529 "superblock": false, 00:32:31.529 "num_base_bdevs": 4, 00:32:31.529 "num_base_bdevs_discovered": 3, 00:32:31.529 "num_base_bdevs_operational": 4, 00:32:31.529 "base_bdevs_list": [ 00:32:31.529 { 00:32:31.529 "name": "BaseBdev1", 00:32:31.529 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:31.529 "is_configured": true, 00:32:31.529 "data_offset": 0, 00:32:31.529 "data_size": 65536 00:32:31.529 }, 00:32:31.529 { 00:32:31.529 "name": "BaseBdev2", 00:32:31.529 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:31.529 "is_configured": true, 00:32:31.529 "data_offset": 0, 00:32:31.529 "data_size": 65536 00:32:31.529 }, 00:32:31.529 { 00:32:31.529 "name": "BaseBdev3", 00:32:31.529 "uuid": "97bd92d8-61d2-4952-b0c1-06ea947ac8b9", 00:32:31.529 "is_configured": true, 00:32:31.529 "data_offset": 0, 00:32:31.529 "data_size": 65536 00:32:31.529 }, 00:32:31.529 { 00:32:31.529 "name": "BaseBdev4", 00:32:31.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.529 "is_configured": false, 00:32:31.529 "data_offset": 0, 00:32:31.529 "data_size": 0 00:32:31.529 } 00:32:31.529 ] 00:32:31.529 }' 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.529 23:18:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.463 23:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:32.463 [2024-07-13 23:18:21.726853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:32.463 [2024-07-13 23:18:21.727013] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000006080 00:32:32.463 [2024-07-13 23:18:21.727027] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:32.463 [2024-07-13 23:18:21.727217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:32:32.463 [2024-07-13 23:18:21.728269] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:32:32.463 [2024-07-13 23:18:21.728294] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:32:32.463 [2024-07-13 23:18:21.728613] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:32.463 BaseBdev4 00:32:32.463 23:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:32:32.464 23:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:32:32.464 23:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:32.464 23:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:32.464 23:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:32.464 23:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:32.464 23:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:32.721 23:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:32.980 [ 00:32:32.980 { 00:32:32.980 "name": "BaseBdev4", 00:32:32.980 "aliases": [ 00:32:32.980 "10349278-d7ac-48ff-9ab0-bfca1fea804c" 00:32:32.980 ], 00:32:32.980 "product_name": "Malloc disk", 00:32:32.980 "block_size": 512, 00:32:32.980 "num_blocks": 65536, 00:32:32.980 "uuid": "10349278-d7ac-48ff-9ab0-bfca1fea804c", 00:32:32.980 "assigned_rate_limits": { 00:32:32.980 "rw_ios_per_sec": 0, 00:32:32.980 "rw_mbytes_per_sec": 0, 00:32:32.980 "r_mbytes_per_sec": 0, 00:32:32.980 "w_mbytes_per_sec": 0 00:32:32.980 }, 00:32:32.980 "claimed": true, 00:32:32.980 "claim_type": "exclusive_write", 00:32:32.980 "zoned": false, 00:32:32.980 "supported_io_types": { 00:32:32.980 "read": true, 00:32:32.980 "write": true, 00:32:32.980 "unmap": true, 00:32:32.980 "flush": true, 00:32:32.980 "reset": true, 00:32:32.980 "nvme_admin": false, 00:32:32.980 "nvme_io": false, 00:32:32.980 "nvme_io_md": false, 00:32:32.980 "write_zeroes": true, 00:32:32.980 "zcopy": true, 00:32:32.980 "get_zone_info": false, 00:32:32.980 "zone_management": false, 00:32:32.980 "zone_append": false, 00:32:32.980 "compare": false, 00:32:32.980 "compare_and_write": false, 00:32:32.980 "abort": true, 00:32:32.980 "seek_hole": false, 00:32:32.980 "seek_data": false, 00:32:32.980 "copy": true, 00:32:32.980 "nvme_iov_md": false 00:32:32.980 }, 00:32:32.980 "memory_domains": [ 00:32:32.980 { 00:32:32.980 "dma_device_id": "system", 00:32:32.980 "dma_device_type": 1 00:32:32.980 }, 00:32:32.980 { 00:32:32.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.980 "dma_device_type": 2 00:32:32.980 } 00:32:32.980 ], 00:32:32.980 "driver_specific": {} 00:32:32.980 } 00:32:32.980 ] 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:32.980 23:18:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.980 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.238 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.238 "name": "Existed_Raid", 00:32:33.238 "uuid": "582f4acf-cd90-452a-91af-500283327bbd", 00:32:33.238 "strip_size_kb": 64, 00:32:33.238 "state": "online", 00:32:33.238 "raid_level": "raid5f", 00:32:33.238 "superblock": false, 00:32:33.238 "num_base_bdevs": 4, 00:32:33.238 "num_base_bdevs_discovered": 4, 00:32:33.238 "num_base_bdevs_operational": 4, 00:32:33.238 "base_bdevs_list": [ 00:32:33.238 { 00:32:33.238 "name": "BaseBdev1", 00:32:33.238 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:33.238 "is_configured": true, 00:32:33.238 "data_offset": 0, 00:32:33.238 "data_size": 65536 00:32:33.238 }, 00:32:33.238 { 00:32:33.238 "name": "BaseBdev2", 00:32:33.238 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:33.238 "is_configured": true, 00:32:33.238 "data_offset": 0, 00:32:33.238 "data_size": 65536 00:32:33.238 }, 00:32:33.238 { 00:32:33.238 "name": "BaseBdev3", 00:32:33.238 "uuid": "97bd92d8-61d2-4952-b0c1-06ea947ac8b9", 00:32:33.238 "is_configured": true, 00:32:33.238 "data_offset": 0, 00:32:33.238 "data_size": 65536 00:32:33.238 }, 00:32:33.238 { 00:32:33.238 "name": "BaseBdev4", 00:32:33.238 "uuid": "10349278-d7ac-48ff-9ab0-bfca1fea804c", 00:32:33.238 "is_configured": true, 00:32:33.238 "data_offset": 0, 00:32:33.238 "data_size": 65536 00:32:33.238 } 00:32:33.238 ] 00:32:33.238 }' 00:32:33.238 23:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.238 23:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:33.805 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:34.063 [2024-07-13 23:18:23.301946] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:34.064 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:34.064 "name": "Existed_Raid", 00:32:34.064 "aliases": [ 00:32:34.064 "582f4acf-cd90-452a-91af-500283327bbd" 00:32:34.064 ], 00:32:34.064 "product_name": "Raid Volume", 00:32:34.064 "block_size": 512, 00:32:34.064 "num_blocks": 196608, 00:32:34.064 "uuid": "582f4acf-cd90-452a-91af-500283327bbd", 00:32:34.064 "assigned_rate_limits": { 00:32:34.064 "rw_ios_per_sec": 0, 00:32:34.064 "rw_mbytes_per_sec": 0, 00:32:34.064 "r_mbytes_per_sec": 0, 00:32:34.064 "w_mbytes_per_sec": 0 00:32:34.064 }, 00:32:34.064 "claimed": false, 00:32:34.064 "zoned": false, 00:32:34.064 "supported_io_types": { 00:32:34.064 "read": true, 00:32:34.064 "write": true, 00:32:34.064 "unmap": false, 00:32:34.064 "flush": false, 00:32:34.064 "reset": true, 00:32:34.064 "nvme_admin": false, 00:32:34.064 "nvme_io": false, 00:32:34.064 "nvme_io_md": false, 00:32:34.064 "write_zeroes": true, 00:32:34.064 "zcopy": false, 00:32:34.064 "get_zone_info": false, 00:32:34.064 "zone_management": false, 00:32:34.064 "zone_append": false, 00:32:34.064 "compare": false, 00:32:34.064 "compare_and_write": false, 00:32:34.064 "abort": false, 00:32:34.064 "seek_hole": false, 00:32:34.064 "seek_data": false, 00:32:34.064 "copy": false, 00:32:34.064 "nvme_iov_md": false 00:32:34.064 }, 00:32:34.064 "driver_specific": { 00:32:34.064 "raid": { 00:32:34.064 "uuid": "582f4acf-cd90-452a-91af-500283327bbd", 00:32:34.064 "strip_size_kb": 64, 00:32:34.064 "state": "online", 00:32:34.064 "raid_level": "raid5f", 00:32:34.064 "superblock": false, 00:32:34.064 "num_base_bdevs": 4, 00:32:34.064 "num_base_bdevs_discovered": 4, 00:32:34.064 "num_base_bdevs_operational": 4, 00:32:34.064 "base_bdevs_list": [ 00:32:34.064 { 00:32:34.064 "name": "BaseBdev1", 00:32:34.064 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:34.064 "is_configured": true, 00:32:34.064 "data_offset": 0, 00:32:34.064 "data_size": 65536 00:32:34.064 }, 00:32:34.064 { 00:32:34.064 "name": "BaseBdev2", 00:32:34.064 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:34.064 "is_configured": true, 00:32:34.064 "data_offset": 0, 00:32:34.064 "data_size": 65536 00:32:34.064 }, 00:32:34.064 { 00:32:34.064 "name": "BaseBdev3", 00:32:34.064 "uuid": "97bd92d8-61d2-4952-b0c1-06ea947ac8b9", 00:32:34.064 "is_configured": true, 00:32:34.064 "data_offset": 0, 00:32:34.064 "data_size": 65536 00:32:34.064 }, 00:32:34.064 { 00:32:34.064 "name": "BaseBdev4", 00:32:34.064 "uuid": "10349278-d7ac-48ff-9ab0-bfca1fea804c", 00:32:34.064 "is_configured": true, 00:32:34.064 "data_offset": 0, 00:32:34.064 "data_size": 65536 00:32:34.064 } 
00:32:34.064 ] 00:32:34.064 } 00:32:34.064 } 00:32:34.064 }' 00:32:34.064 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:34.064 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:34.064 BaseBdev2 00:32:34.064 BaseBdev3 00:32:34.064 BaseBdev4' 00:32:34.064 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:34.064 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:34.064 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:34.323 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:34.323 "name": "BaseBdev1", 00:32:34.323 "aliases": [ 00:32:34.323 "d9a1da74-91ee-49d2-9bc0-99cc49544193" 00:32:34.323 ], 00:32:34.323 "product_name": "Malloc disk", 00:32:34.323 "block_size": 512, 00:32:34.323 "num_blocks": 65536, 00:32:34.323 "uuid": "d9a1da74-91ee-49d2-9bc0-99cc49544193", 00:32:34.323 "assigned_rate_limits": { 00:32:34.323 "rw_ios_per_sec": 0, 00:32:34.323 "rw_mbytes_per_sec": 0, 00:32:34.323 "r_mbytes_per_sec": 0, 00:32:34.323 "w_mbytes_per_sec": 0 00:32:34.323 }, 00:32:34.323 "claimed": true, 00:32:34.323 "claim_type": "exclusive_write", 00:32:34.323 "zoned": false, 00:32:34.323 "supported_io_types": { 00:32:34.323 "read": true, 00:32:34.323 "write": true, 00:32:34.323 "unmap": true, 00:32:34.323 "flush": true, 00:32:34.323 "reset": true, 00:32:34.323 "nvme_admin": false, 00:32:34.323 "nvme_io": false, 00:32:34.323 "nvme_io_md": false, 00:32:34.323 "write_zeroes": true, 00:32:34.323 "zcopy": true, 00:32:34.323 "get_zone_info": false, 00:32:34.323 "zone_management": false, 00:32:34.323 "zone_append": false, 00:32:34.323 "compare": false, 00:32:34.323 "compare_and_write": false, 00:32:34.323 "abort": true, 00:32:34.323 "seek_hole": false, 00:32:34.323 "seek_data": false, 00:32:34.323 "copy": true, 00:32:34.323 "nvme_iov_md": false 00:32:34.323 }, 00:32:34.323 "memory_domains": [ 00:32:34.323 { 00:32:34.323 "dma_device_id": "system", 00:32:34.323 "dma_device_type": 1 00:32:34.323 }, 00:32:34.323 { 00:32:34.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:34.323 "dma_device_type": 2 00:32:34.323 } 00:32:34.323 ], 00:32:34.323 "driver_specific": {} 00:32:34.323 }' 00:32:34.323 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:34.323 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:34.323 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:34.323 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:34.323 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:34.582 23:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:35.150 "name": "BaseBdev2", 00:32:35.150 "aliases": [ 00:32:35.150 "50ed07e3-f921-4f5e-887e-a45b372a9b25" 00:32:35.150 ], 00:32:35.150 "product_name": "Malloc disk", 00:32:35.150 "block_size": 512, 00:32:35.150 "num_blocks": 65536, 00:32:35.150 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:35.150 "assigned_rate_limits": { 00:32:35.150 "rw_ios_per_sec": 0, 00:32:35.150 "rw_mbytes_per_sec": 0, 00:32:35.150 "r_mbytes_per_sec": 0, 00:32:35.150 "w_mbytes_per_sec": 0 00:32:35.150 }, 00:32:35.150 "claimed": true, 00:32:35.150 "claim_type": "exclusive_write", 00:32:35.150 "zoned": false, 00:32:35.150 "supported_io_types": { 00:32:35.150 "read": true, 00:32:35.150 "write": true, 00:32:35.150 "unmap": true, 00:32:35.150 "flush": true, 00:32:35.150 "reset": true, 00:32:35.150 "nvme_admin": false, 00:32:35.150 "nvme_io": false, 00:32:35.150 "nvme_io_md": false, 00:32:35.150 "write_zeroes": true, 00:32:35.150 "zcopy": true, 00:32:35.150 "get_zone_info": false, 00:32:35.150 "zone_management": false, 00:32:35.150 "zone_append": false, 00:32:35.150 "compare": false, 00:32:35.150 "compare_and_write": false, 00:32:35.150 "abort": true, 00:32:35.150 "seek_hole": false, 00:32:35.150 "seek_data": false, 00:32:35.150 "copy": true, 00:32:35.150 "nvme_iov_md": false 00:32:35.150 }, 00:32:35.150 "memory_domains": [ 00:32:35.150 { 00:32:35.150 "dma_device_id": "system", 00:32:35.150 "dma_device_type": 1 00:32:35.150 }, 00:32:35.150 { 00:32:35.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.150 "dma_device_type": 2 00:32:35.150 } 00:32:35.150 ], 00:32:35.150 "driver_specific": {} 00:32:35.150 }' 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:35.150 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:35.409 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:35.409 23:18:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:35.409 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:35.409 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:35.409 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:35.668 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:35.668 "name": "BaseBdev3", 00:32:35.668 "aliases": [ 00:32:35.668 "97bd92d8-61d2-4952-b0c1-06ea947ac8b9" 00:32:35.668 ], 00:32:35.668 "product_name": "Malloc disk", 00:32:35.668 "block_size": 512, 00:32:35.668 "num_blocks": 65536, 00:32:35.668 "uuid": "97bd92d8-61d2-4952-b0c1-06ea947ac8b9", 00:32:35.668 "assigned_rate_limits": { 00:32:35.668 "rw_ios_per_sec": 0, 00:32:35.668 "rw_mbytes_per_sec": 0, 00:32:35.668 "r_mbytes_per_sec": 0, 00:32:35.668 "w_mbytes_per_sec": 0 00:32:35.668 }, 00:32:35.668 "claimed": true, 00:32:35.668 "claim_type": "exclusive_write", 00:32:35.668 "zoned": false, 00:32:35.668 "supported_io_types": { 00:32:35.668 "read": true, 00:32:35.668 "write": true, 00:32:35.668 "unmap": true, 00:32:35.668 "flush": true, 00:32:35.668 "reset": true, 00:32:35.668 "nvme_admin": false, 00:32:35.668 "nvme_io": false, 00:32:35.668 "nvme_io_md": false, 00:32:35.668 "write_zeroes": true, 00:32:35.668 "zcopy": true, 00:32:35.668 "get_zone_info": false, 00:32:35.668 "zone_management": false, 00:32:35.668 "zone_append": false, 00:32:35.668 "compare": false, 00:32:35.668 "compare_and_write": false, 00:32:35.668 "abort": true, 00:32:35.668 "seek_hole": false, 00:32:35.668 "seek_data": false, 00:32:35.668 "copy": true, 00:32:35.668 "nvme_iov_md": false 00:32:35.668 }, 00:32:35.668 "memory_domains": [ 00:32:35.668 { 00:32:35.668 "dma_device_id": "system", 00:32:35.668 "dma_device_type": 1 00:32:35.668 }, 00:32:35.668 { 00:32:35.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.668 "dma_device_type": 2 00:32:35.668 } 00:32:35.668 ], 00:32:35.668 "driver_specific": {} 00:32:35.668 }' 00:32:35.668 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:35.668 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:35.668 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:35.668 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:35.668 23:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:35.668 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:35.668 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:35.927 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:35.927 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:35.927 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:35.927 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:35.927 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:35.927 23:18:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:35.928 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:35.928 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:36.187 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:36.187 "name": "BaseBdev4", 00:32:36.187 "aliases": [ 00:32:36.187 "10349278-d7ac-48ff-9ab0-bfca1fea804c" 00:32:36.187 ], 00:32:36.187 "product_name": "Malloc disk", 00:32:36.187 "block_size": 512, 00:32:36.187 "num_blocks": 65536, 00:32:36.187 "uuid": "10349278-d7ac-48ff-9ab0-bfca1fea804c", 00:32:36.187 "assigned_rate_limits": { 00:32:36.187 "rw_ios_per_sec": 0, 00:32:36.187 "rw_mbytes_per_sec": 0, 00:32:36.187 "r_mbytes_per_sec": 0, 00:32:36.187 "w_mbytes_per_sec": 0 00:32:36.187 }, 00:32:36.187 "claimed": true, 00:32:36.187 "claim_type": "exclusive_write", 00:32:36.187 "zoned": false, 00:32:36.187 "supported_io_types": { 00:32:36.187 "read": true, 00:32:36.187 "write": true, 00:32:36.187 "unmap": true, 00:32:36.187 "flush": true, 00:32:36.187 "reset": true, 00:32:36.187 "nvme_admin": false, 00:32:36.187 "nvme_io": false, 00:32:36.187 "nvme_io_md": false, 00:32:36.187 "write_zeroes": true, 00:32:36.187 "zcopy": true, 00:32:36.187 "get_zone_info": false, 00:32:36.187 "zone_management": false, 00:32:36.187 "zone_append": false, 00:32:36.187 "compare": false, 00:32:36.187 "compare_and_write": false, 00:32:36.187 "abort": true, 00:32:36.187 "seek_hole": false, 00:32:36.187 "seek_data": false, 00:32:36.187 "copy": true, 00:32:36.187 "nvme_iov_md": false 00:32:36.187 }, 00:32:36.187 "memory_domains": [ 00:32:36.187 { 00:32:36.187 "dma_device_id": "system", 00:32:36.187 "dma_device_type": 1 00:32:36.187 }, 00:32:36.187 { 00:32:36.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.187 "dma_device_type": 2 00:32:36.187 } 00:32:36.187 ], 00:32:36.187 "driver_specific": {} 00:32:36.187 }' 00:32:36.187 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:36.187 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:36.446 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:36.446 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:36.446 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:36.447 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:36.447 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:36.447 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:36.447 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:36.447 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:36.706 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:36.706 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:36.706 23:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:36.964 [2024-07-13 23:18:26.162022] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:36.964 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:36.964 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:36.964 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:36.964 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:36.964 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.965 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.222 23:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:37.222 "name": "Existed_Raid", 00:32:37.222 "uuid": "582f4acf-cd90-452a-91af-500283327bbd", 00:32:37.222 "strip_size_kb": 64, 00:32:37.222 "state": "online", 00:32:37.222 "raid_level": "raid5f", 00:32:37.222 "superblock": false, 00:32:37.222 "num_base_bdevs": 4, 00:32:37.222 "num_base_bdevs_discovered": 3, 00:32:37.222 "num_base_bdevs_operational": 3, 00:32:37.222 "base_bdevs_list": [ 00:32:37.222 { 00:32:37.222 "name": null, 00:32:37.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.222 "is_configured": false, 00:32:37.222 "data_offset": 0, 00:32:37.222 "data_size": 65536 00:32:37.222 }, 00:32:37.222 { 00:32:37.222 "name": "BaseBdev2", 00:32:37.222 "uuid": "50ed07e3-f921-4f5e-887e-a45b372a9b25", 00:32:37.222 "is_configured": true, 00:32:37.222 "data_offset": 0, 00:32:37.222 "data_size": 65536 00:32:37.222 }, 00:32:37.222 { 00:32:37.222 "name": "BaseBdev3", 00:32:37.222 "uuid": "97bd92d8-61d2-4952-b0c1-06ea947ac8b9", 00:32:37.222 "is_configured": true, 00:32:37.222 "data_offset": 0, 00:32:37.222 "data_size": 65536 00:32:37.222 }, 00:32:37.222 { 00:32:37.222 "name": "BaseBdev4", 00:32:37.222 "uuid": "10349278-d7ac-48ff-9ab0-bfca1fea804c", 00:32:37.222 "is_configured": true, 00:32:37.222 "data_offset": 0, 00:32:37.222 "data_size": 65536 00:32:37.222 } 00:32:37.222 ] 00:32:37.222 }' 00:32:37.222 23:18:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:37.222 23:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.811 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:37.811 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:37.811 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.811 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:38.069 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:38.069 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:38.069 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:38.328 [2024-07-13 23:18:27.637517] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:38.328 [2024-07-13 23:18:27.637644] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:38.328 [2024-07-13 23:18:27.647415] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:38.328 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:38.328 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:38.328 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.328 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:38.587 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:38.587 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:38.587 23:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:38.846 [2024-07-13 23:18:28.183611] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:38.846 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:38.846 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:38.846 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.846 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:39.105 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:39.105 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:39.105 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:32:39.363 [2024-07-13 23:18:28.625806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev4 00:32:39.363 [2024-07-13 23:18:28.625882] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:32:39.363 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:39.363 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:39.363 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.363 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:39.621 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:39.621 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:39.621 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:32:39.621 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:39.621 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:39.621 23:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:39.879 BaseBdev2 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:39.879 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:40.137 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:40.395 [ 00:32:40.395 { 00:32:40.395 "name": "BaseBdev2", 00:32:40.395 "aliases": [ 00:32:40.395 "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5" 00:32:40.395 ], 00:32:40.395 "product_name": "Malloc disk", 00:32:40.395 "block_size": 512, 00:32:40.395 "num_blocks": 65536, 00:32:40.395 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:40.395 "assigned_rate_limits": { 00:32:40.395 "rw_ios_per_sec": 0, 00:32:40.395 "rw_mbytes_per_sec": 0, 00:32:40.395 "r_mbytes_per_sec": 0, 00:32:40.395 "w_mbytes_per_sec": 0 00:32:40.395 }, 00:32:40.395 "claimed": false, 00:32:40.395 "zoned": false, 00:32:40.395 "supported_io_types": { 00:32:40.395 "read": true, 00:32:40.395 "write": true, 00:32:40.395 "unmap": true, 00:32:40.395 "flush": true, 00:32:40.395 "reset": true, 00:32:40.395 "nvme_admin": false, 00:32:40.395 "nvme_io": false, 00:32:40.395 "nvme_io_md": false, 00:32:40.395 "write_zeroes": true, 00:32:40.395 "zcopy": true, 00:32:40.395 "get_zone_info": false, 00:32:40.395 "zone_management": false, 00:32:40.395 "zone_append": false, 00:32:40.395 
"compare": false, 00:32:40.395 "compare_and_write": false, 00:32:40.395 "abort": true, 00:32:40.395 "seek_hole": false, 00:32:40.395 "seek_data": false, 00:32:40.395 "copy": true, 00:32:40.395 "nvme_iov_md": false 00:32:40.395 }, 00:32:40.395 "memory_domains": [ 00:32:40.395 { 00:32:40.395 "dma_device_id": "system", 00:32:40.395 "dma_device_type": 1 00:32:40.395 }, 00:32:40.395 { 00:32:40.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.395 "dma_device_type": 2 00:32:40.395 } 00:32:40.395 ], 00:32:40.395 "driver_specific": {} 00:32:40.395 } 00:32:40.395 ] 00:32:40.395 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:40.395 23:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:40.395 23:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:40.395 23:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:40.653 BaseBdev3 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:40.653 23:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:40.912 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:40.912 [ 00:32:40.912 { 00:32:40.912 "name": "BaseBdev3", 00:32:40.912 "aliases": [ 00:32:40.912 "574e423d-8abe-42ad-b8da-81f812d86cc7" 00:32:40.912 ], 00:32:40.912 "product_name": "Malloc disk", 00:32:40.912 "block_size": 512, 00:32:40.912 "num_blocks": 65536, 00:32:40.912 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:40.912 "assigned_rate_limits": { 00:32:40.912 "rw_ios_per_sec": 0, 00:32:40.912 "rw_mbytes_per_sec": 0, 00:32:40.912 "r_mbytes_per_sec": 0, 00:32:40.912 "w_mbytes_per_sec": 0 00:32:40.912 }, 00:32:40.912 "claimed": false, 00:32:40.912 "zoned": false, 00:32:40.912 "supported_io_types": { 00:32:40.912 "read": true, 00:32:40.912 "write": true, 00:32:40.912 "unmap": true, 00:32:40.912 "flush": true, 00:32:40.912 "reset": true, 00:32:40.912 "nvme_admin": false, 00:32:40.912 "nvme_io": false, 00:32:40.912 "nvme_io_md": false, 00:32:40.912 "write_zeroes": true, 00:32:40.912 "zcopy": true, 00:32:40.912 "get_zone_info": false, 00:32:40.912 "zone_management": false, 00:32:40.912 "zone_append": false, 00:32:40.912 "compare": false, 00:32:40.912 "compare_and_write": false, 00:32:40.912 "abort": true, 00:32:40.912 "seek_hole": false, 00:32:40.912 "seek_data": false, 00:32:40.912 "copy": true, 00:32:40.912 "nvme_iov_md": false 00:32:40.912 }, 00:32:40.912 "memory_domains": [ 00:32:40.912 { 00:32:40.912 "dma_device_id": "system", 
00:32:40.912 "dma_device_type": 1 00:32:40.912 }, 00:32:40.912 { 00:32:40.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.912 "dma_device_type": 2 00:32:40.912 } 00:32:40.912 ], 00:32:40.912 "driver_specific": {} 00:32:40.912 } 00:32:40.912 ] 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:41.170 BaseBdev4 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:41.170 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:41.428 23:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:41.685 [ 00:32:41.685 { 00:32:41.685 "name": "BaseBdev4", 00:32:41.685 "aliases": [ 00:32:41.685 "04d67f6c-7f74-4798-9e04-b38278a3e631" 00:32:41.685 ], 00:32:41.685 "product_name": "Malloc disk", 00:32:41.685 "block_size": 512, 00:32:41.685 "num_blocks": 65536, 00:32:41.685 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:41.685 "assigned_rate_limits": { 00:32:41.685 "rw_ios_per_sec": 0, 00:32:41.685 "rw_mbytes_per_sec": 0, 00:32:41.685 "r_mbytes_per_sec": 0, 00:32:41.685 "w_mbytes_per_sec": 0 00:32:41.685 }, 00:32:41.685 "claimed": false, 00:32:41.685 "zoned": false, 00:32:41.685 "supported_io_types": { 00:32:41.685 "read": true, 00:32:41.685 "write": true, 00:32:41.685 "unmap": true, 00:32:41.685 "flush": true, 00:32:41.685 "reset": true, 00:32:41.685 "nvme_admin": false, 00:32:41.685 "nvme_io": false, 00:32:41.685 "nvme_io_md": false, 00:32:41.685 "write_zeroes": true, 00:32:41.685 "zcopy": true, 00:32:41.685 "get_zone_info": false, 00:32:41.685 "zone_management": false, 00:32:41.685 "zone_append": false, 00:32:41.685 "compare": false, 00:32:41.685 "compare_and_write": false, 00:32:41.685 "abort": true, 00:32:41.685 "seek_hole": false, 00:32:41.685 "seek_data": false, 00:32:41.685 "copy": true, 00:32:41.686 "nvme_iov_md": false 00:32:41.686 }, 00:32:41.686 "memory_domains": [ 00:32:41.686 { 00:32:41.686 "dma_device_id": "system", 00:32:41.686 "dma_device_type": 1 00:32:41.686 }, 00:32:41.686 { 00:32:41.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.686 "dma_device_type": 2 00:32:41.686 } 00:32:41.686 ], 00:32:41.686 "driver_specific": {} 00:32:41.686 } 00:32:41.686 ] 00:32:41.686 23:18:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:32:41.686 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:41.686 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:41.686 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:41.944 [2024-07-13 23:18:31.215402] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:41.944 [2024-07-13 23:18:31.216204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:41.944 [2024-07-13 23:18:31.216251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:41.944 [2024-07-13 23:18:31.218603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:41.944 [2024-07-13 23:18:31.218664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:41.944 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:41.945 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:41.945 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:41.945 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.945 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:42.205 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:42.205 "name": "Existed_Raid", 00:32:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.205 "strip_size_kb": 64, 00:32:42.205 "state": "configuring", 00:32:42.205 "raid_level": "raid5f", 00:32:42.205 "superblock": false, 00:32:42.205 "num_base_bdevs": 4, 00:32:42.205 "num_base_bdevs_discovered": 3, 00:32:42.205 "num_base_bdevs_operational": 4, 00:32:42.205 "base_bdevs_list": [ 00:32:42.205 { 00:32:42.205 "name": "BaseBdev1", 00:32:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.205 "is_configured": false, 00:32:42.205 "data_offset": 0, 00:32:42.205 "data_size": 0 00:32:42.205 }, 00:32:42.205 { 00:32:42.205 "name": "BaseBdev2", 00:32:42.205 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:42.205 "is_configured": true, 00:32:42.205 "data_offset": 0, 
00:32:42.205 "data_size": 65536 00:32:42.205 }, 00:32:42.205 { 00:32:42.205 "name": "BaseBdev3", 00:32:42.205 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:42.205 "is_configured": true, 00:32:42.205 "data_offset": 0, 00:32:42.205 "data_size": 65536 00:32:42.205 }, 00:32:42.205 { 00:32:42.205 "name": "BaseBdev4", 00:32:42.205 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:42.205 "is_configured": true, 00:32:42.205 "data_offset": 0, 00:32:42.205 "data_size": 65536 00:32:42.205 } 00:32:42.205 ] 00:32:42.205 }' 00:32:42.205 23:18:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:42.205 23:18:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.774 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:43.032 [2024-07-13 23:18:32.321786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:43.032 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.033 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.291 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.291 "name": "Existed_Raid", 00:32:43.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.291 "strip_size_kb": 64, 00:32:43.291 "state": "configuring", 00:32:43.291 "raid_level": "raid5f", 00:32:43.291 "superblock": false, 00:32:43.291 "num_base_bdevs": 4, 00:32:43.291 "num_base_bdevs_discovered": 2, 00:32:43.291 "num_base_bdevs_operational": 4, 00:32:43.291 "base_bdevs_list": [ 00:32:43.291 { 00:32:43.291 "name": "BaseBdev1", 00:32:43.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.291 "is_configured": false, 00:32:43.291 "data_offset": 0, 00:32:43.291 "data_size": 0 00:32:43.291 }, 00:32:43.291 { 00:32:43.291 "name": null, 00:32:43.291 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:43.291 "is_configured": false, 00:32:43.291 "data_offset": 0, 00:32:43.291 "data_size": 65536 00:32:43.291 }, 00:32:43.291 { 00:32:43.291 "name": "BaseBdev3", 00:32:43.291 "uuid": 
"574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:43.291 "is_configured": true, 00:32:43.291 "data_offset": 0, 00:32:43.291 "data_size": 65536 00:32:43.291 }, 00:32:43.291 { 00:32:43.291 "name": "BaseBdev4", 00:32:43.291 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:43.291 "is_configured": true, 00:32:43.291 "data_offset": 0, 00:32:43.291 "data_size": 65536 00:32:43.291 } 00:32:43.291 ] 00:32:43.291 }' 00:32:43.291 23:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.291 23:18:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.857 23:18:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.857 23:18:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:44.114 23:18:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:44.114 23:18:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:44.372 [2024-07-13 23:18:33.747057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:44.372 BaseBdev1 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:44.372 23:18:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:44.630 23:18:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:44.888 [ 00:32:44.888 { 00:32:44.888 "name": "BaseBdev1", 00:32:44.888 "aliases": [ 00:32:44.888 "11c8c37a-889e-49d8-ae97-3c87ac38daa5" 00:32:44.888 ], 00:32:44.888 "product_name": "Malloc disk", 00:32:44.888 "block_size": 512, 00:32:44.888 "num_blocks": 65536, 00:32:44.888 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:44.888 "assigned_rate_limits": { 00:32:44.888 "rw_ios_per_sec": 0, 00:32:44.888 "rw_mbytes_per_sec": 0, 00:32:44.888 "r_mbytes_per_sec": 0, 00:32:44.888 "w_mbytes_per_sec": 0 00:32:44.888 }, 00:32:44.888 "claimed": true, 00:32:44.888 "claim_type": "exclusive_write", 00:32:44.888 "zoned": false, 00:32:44.888 "supported_io_types": { 00:32:44.888 "read": true, 00:32:44.888 "write": true, 00:32:44.888 "unmap": true, 00:32:44.888 "flush": true, 00:32:44.888 "reset": true, 00:32:44.888 "nvme_admin": false, 00:32:44.888 "nvme_io": false, 00:32:44.888 "nvme_io_md": false, 00:32:44.888 "write_zeroes": true, 00:32:44.888 "zcopy": true, 00:32:44.888 "get_zone_info": false, 00:32:44.888 "zone_management": false, 00:32:44.888 "zone_append": false, 
00:32:44.888 "compare": false, 00:32:44.888 "compare_and_write": false, 00:32:44.888 "abort": true, 00:32:44.888 "seek_hole": false, 00:32:44.888 "seek_data": false, 00:32:44.888 "copy": true, 00:32:44.888 "nvme_iov_md": false 00:32:44.888 }, 00:32:44.888 "memory_domains": [ 00:32:44.888 { 00:32:44.888 "dma_device_id": "system", 00:32:44.888 "dma_device_type": 1 00:32:44.888 }, 00:32:44.888 { 00:32:44.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:44.888 "dma_device_type": 2 00:32:44.888 } 00:32:44.888 ], 00:32:44.888 "driver_specific": {} 00:32:44.888 } 00:32:44.888 ] 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.888 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.146 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.146 "name": "Existed_Raid", 00:32:45.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.146 "strip_size_kb": 64, 00:32:45.146 "state": "configuring", 00:32:45.146 "raid_level": "raid5f", 00:32:45.146 "superblock": false, 00:32:45.146 "num_base_bdevs": 4, 00:32:45.146 "num_base_bdevs_discovered": 3, 00:32:45.146 "num_base_bdevs_operational": 4, 00:32:45.146 "base_bdevs_list": [ 00:32:45.146 { 00:32:45.146 "name": "BaseBdev1", 00:32:45.146 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:45.146 "is_configured": true, 00:32:45.146 "data_offset": 0, 00:32:45.146 "data_size": 65536 00:32:45.146 }, 00:32:45.146 { 00:32:45.146 "name": null, 00:32:45.146 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:45.146 "is_configured": false, 00:32:45.146 "data_offset": 0, 00:32:45.146 "data_size": 65536 00:32:45.146 }, 00:32:45.146 { 00:32:45.146 "name": "BaseBdev3", 00:32:45.146 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:45.146 "is_configured": true, 00:32:45.146 "data_offset": 0, 00:32:45.146 "data_size": 65536 00:32:45.146 }, 00:32:45.146 { 00:32:45.146 "name": "BaseBdev4", 00:32:45.146 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:45.146 "is_configured": true, 00:32:45.146 "data_offset": 0, 00:32:45.146 
"data_size": 65536 00:32:45.146 } 00:32:45.146 ] 00:32:45.146 }' 00:32:45.146 23:18:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.146 23:18:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.714 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.714 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:45.973 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:45.973 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:46.232 [2024-07-13 23:18:35.519526] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.232 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:46.490 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:46.490 "name": "Existed_Raid", 00:32:46.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.490 "strip_size_kb": 64, 00:32:46.490 "state": "configuring", 00:32:46.490 "raid_level": "raid5f", 00:32:46.490 "superblock": false, 00:32:46.490 "num_base_bdevs": 4, 00:32:46.490 "num_base_bdevs_discovered": 2, 00:32:46.490 "num_base_bdevs_operational": 4, 00:32:46.490 "base_bdevs_list": [ 00:32:46.491 { 00:32:46.491 "name": "BaseBdev1", 00:32:46.491 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:46.491 "is_configured": true, 00:32:46.491 "data_offset": 0, 00:32:46.491 "data_size": 65536 00:32:46.491 }, 00:32:46.491 { 00:32:46.491 "name": null, 00:32:46.491 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:46.491 "is_configured": false, 00:32:46.491 "data_offset": 0, 00:32:46.491 "data_size": 65536 00:32:46.491 }, 00:32:46.491 { 00:32:46.491 "name": null, 00:32:46.491 "uuid": 
"574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:46.491 "is_configured": false, 00:32:46.491 "data_offset": 0, 00:32:46.491 "data_size": 65536 00:32:46.491 }, 00:32:46.491 { 00:32:46.491 "name": "BaseBdev4", 00:32:46.491 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:46.491 "is_configured": true, 00:32:46.491 "data_offset": 0, 00:32:46.491 "data_size": 65536 00:32:46.491 } 00:32:46.491 ] 00:32:46.491 }' 00:32:46.491 23:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:46.491 23:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.058 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.058 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:47.317 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:47.317 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:47.576 [2024-07-13 23:18:36.921722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.576 23:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:47.835 23:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:47.835 "name": "Existed_Raid", 00:32:47.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.835 "strip_size_kb": 64, 00:32:47.835 "state": "configuring", 00:32:47.835 "raid_level": "raid5f", 00:32:47.835 "superblock": false, 00:32:47.835 "num_base_bdevs": 4, 00:32:47.835 "num_base_bdevs_discovered": 3, 00:32:47.835 "num_base_bdevs_operational": 4, 00:32:47.835 "base_bdevs_list": [ 00:32:47.835 { 00:32:47.835 "name": "BaseBdev1", 00:32:47.835 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:47.835 "is_configured": true, 00:32:47.835 
"data_offset": 0, 00:32:47.835 "data_size": 65536 00:32:47.835 }, 00:32:47.835 { 00:32:47.835 "name": null, 00:32:47.835 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:47.835 "is_configured": false, 00:32:47.835 "data_offset": 0, 00:32:47.835 "data_size": 65536 00:32:47.835 }, 00:32:47.835 { 00:32:47.835 "name": "BaseBdev3", 00:32:47.835 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:47.835 "is_configured": true, 00:32:47.835 "data_offset": 0, 00:32:47.835 "data_size": 65536 00:32:47.835 }, 00:32:47.835 { 00:32:47.835 "name": "BaseBdev4", 00:32:47.835 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:47.835 "is_configured": true, 00:32:47.835 "data_offset": 0, 00:32:47.835 "data_size": 65536 00:32:47.835 } 00:32:47.835 ] 00:32:47.835 }' 00:32:47.835 23:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:47.835 23:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.770 23:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.770 23:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:48.770 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:48.770 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:49.029 [2024-07-13 23:18:38.386112] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.029 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:49.290 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.290 "name": "Existed_Raid", 00:32:49.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.290 "strip_size_kb": 64, 00:32:49.290 "state": "configuring", 00:32:49.290 "raid_level": "raid5f", 00:32:49.290 "superblock": false, 00:32:49.290 
"num_base_bdevs": 4, 00:32:49.290 "num_base_bdevs_discovered": 2, 00:32:49.290 "num_base_bdevs_operational": 4, 00:32:49.290 "base_bdevs_list": [ 00:32:49.290 { 00:32:49.290 "name": null, 00:32:49.290 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:49.290 "is_configured": false, 00:32:49.290 "data_offset": 0, 00:32:49.290 "data_size": 65536 00:32:49.290 }, 00:32:49.290 { 00:32:49.290 "name": null, 00:32:49.290 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:49.290 "is_configured": false, 00:32:49.290 "data_offset": 0, 00:32:49.290 "data_size": 65536 00:32:49.290 }, 00:32:49.290 { 00:32:49.290 "name": "BaseBdev3", 00:32:49.290 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:49.290 "is_configured": true, 00:32:49.290 "data_offset": 0, 00:32:49.290 "data_size": 65536 00:32:49.290 }, 00:32:49.290 { 00:32:49.290 "name": "BaseBdev4", 00:32:49.290 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:49.290 "is_configured": true, 00:32:49.290 "data_offset": 0, 00:32:49.290 "data_size": 65536 00:32:49.290 } 00:32:49.290 ] 00:32:49.290 }' 00:32:49.290 23:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.290 23:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.225 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.225 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:50.225 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:50.225 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:50.484 [2024-07-13 23:18:39.671981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:50.484 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.743 23:18:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:50.743 "name": "Existed_Raid", 00:32:50.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.743 "strip_size_kb": 64, 00:32:50.743 "state": "configuring", 00:32:50.743 "raid_level": "raid5f", 00:32:50.743 "superblock": false, 00:32:50.743 "num_base_bdevs": 4, 00:32:50.743 "num_base_bdevs_discovered": 3, 00:32:50.743 "num_base_bdevs_operational": 4, 00:32:50.743 "base_bdevs_list": [ 00:32:50.743 { 00:32:50.743 "name": null, 00:32:50.743 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:50.743 "is_configured": false, 00:32:50.743 "data_offset": 0, 00:32:50.743 "data_size": 65536 00:32:50.743 }, 00:32:50.743 { 00:32:50.743 "name": "BaseBdev2", 00:32:50.743 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:50.743 "is_configured": true, 00:32:50.743 "data_offset": 0, 00:32:50.743 "data_size": 65536 00:32:50.743 }, 00:32:50.743 { 00:32:50.743 "name": "BaseBdev3", 00:32:50.743 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:50.743 "is_configured": true, 00:32:50.743 "data_offset": 0, 00:32:50.743 "data_size": 65536 00:32:50.743 }, 00:32:50.743 { 00:32:50.743 "name": "BaseBdev4", 00:32:50.743 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:50.743 "is_configured": true, 00:32:50.743 "data_offset": 0, 00:32:50.743 "data_size": 65536 00:32:50.743 } 00:32:50.743 ] 00:32:50.743 }' 00:32:50.743 23:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:50.743 23:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.311 23:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.311 23:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:51.569 23:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:51.569 23:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:51.569 23:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.827 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 11c8c37a-889e-49d8-ae97-3c87ac38daa5 00:32:52.085 [2024-07-13 23:18:41.433702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:52.085 [2024-07-13 23:18:41.433830] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:32:52.085 [2024-07-13 23:18:41.433842] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:52.085 [2024-07-13 23:18:41.433959] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:32:52.085 [2024-07-13 23:18:41.434780] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:32:52.085 [2024-07-13 23:18:41.434822] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:32:52.085 [2024-07-13 23:18:41.435077] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.085 NewBaseBdev 00:32:52.085 23:18:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:52.085 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:32:52.085 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:52.085 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:32:52.085 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:52.085 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:52.085 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:52.343 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:52.602 [ 00:32:52.602 { 00:32:52.602 "name": "NewBaseBdev", 00:32:52.602 "aliases": [ 00:32:52.602 "11c8c37a-889e-49d8-ae97-3c87ac38daa5" 00:32:52.602 ], 00:32:52.602 "product_name": "Malloc disk", 00:32:52.602 "block_size": 512, 00:32:52.602 "num_blocks": 65536, 00:32:52.602 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:52.602 "assigned_rate_limits": { 00:32:52.602 "rw_ios_per_sec": 0, 00:32:52.602 "rw_mbytes_per_sec": 0, 00:32:52.602 "r_mbytes_per_sec": 0, 00:32:52.602 "w_mbytes_per_sec": 0 00:32:52.602 }, 00:32:52.602 "claimed": true, 00:32:52.602 "claim_type": "exclusive_write", 00:32:52.602 "zoned": false, 00:32:52.602 "supported_io_types": { 00:32:52.602 "read": true, 00:32:52.602 "write": true, 00:32:52.602 "unmap": true, 00:32:52.602 "flush": true, 00:32:52.602 "reset": true, 00:32:52.602 "nvme_admin": false, 00:32:52.602 "nvme_io": false, 00:32:52.602 "nvme_io_md": false, 00:32:52.602 "write_zeroes": true, 00:32:52.602 "zcopy": true, 00:32:52.602 "get_zone_info": false, 00:32:52.602 "zone_management": false, 00:32:52.602 "zone_append": false, 00:32:52.602 "compare": false, 00:32:52.602 "compare_and_write": false, 00:32:52.602 "abort": true, 00:32:52.602 "seek_hole": false, 00:32:52.602 "seek_data": false, 00:32:52.602 "copy": true, 00:32:52.602 "nvme_iov_md": false 00:32:52.602 }, 00:32:52.602 "memory_domains": [ 00:32:52.602 { 00:32:52.602 "dma_device_id": "system", 00:32:52.602 "dma_device_type": 1 00:32:52.602 }, 00:32:52.602 { 00:32:52.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:52.602 "dma_device_type": 2 00:32:52.602 } 00:32:52.602 ], 00:32:52.602 "driver_specific": {} 00:32:52.602 } 00:32:52.602 ] 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.602 23:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:52.860 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.860 "name": "Existed_Raid", 00:32:52.860 "uuid": "44940832-aaaf-46ac-87f5-b48ae1a1ed03", 00:32:52.860 "strip_size_kb": 64, 00:32:52.860 "state": "online", 00:32:52.860 "raid_level": "raid5f", 00:32:52.860 "superblock": false, 00:32:52.860 "num_base_bdevs": 4, 00:32:52.860 "num_base_bdevs_discovered": 4, 00:32:52.860 "num_base_bdevs_operational": 4, 00:32:52.860 "base_bdevs_list": [ 00:32:52.860 { 00:32:52.860 "name": "NewBaseBdev", 00:32:52.860 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:52.860 "is_configured": true, 00:32:52.860 "data_offset": 0, 00:32:52.860 "data_size": 65536 00:32:52.860 }, 00:32:52.860 { 00:32:52.860 "name": "BaseBdev2", 00:32:52.860 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:52.860 "is_configured": true, 00:32:52.860 "data_offset": 0, 00:32:52.860 "data_size": 65536 00:32:52.860 }, 00:32:52.860 { 00:32:52.860 "name": "BaseBdev3", 00:32:52.860 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:52.860 "is_configured": true, 00:32:52.860 "data_offset": 0, 00:32:52.860 "data_size": 65536 00:32:52.860 }, 00:32:52.860 { 00:32:52.860 "name": "BaseBdev4", 00:32:52.860 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:52.860 "is_configured": true, 00:32:52.860 "data_offset": 0, 00:32:52.860 "data_size": 65536 00:32:52.860 } 00:32:52.860 ] 00:32:52.860 }' 00:32:52.860 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.860 23:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.426 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:53.426 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:53.426 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:53.426 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:53.426 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:53.426 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:53.427 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:53.427 23:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:53.684 [2024-07-13 23:18:42.986260] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:53.684 23:18:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:53.684 "name": "Existed_Raid", 00:32:53.684 "aliases": [ 00:32:53.684 "44940832-aaaf-46ac-87f5-b48ae1a1ed03" 00:32:53.684 ], 00:32:53.684 "product_name": "Raid Volume", 00:32:53.684 "block_size": 512, 00:32:53.684 "num_blocks": 196608, 00:32:53.684 "uuid": "44940832-aaaf-46ac-87f5-b48ae1a1ed03", 00:32:53.684 "assigned_rate_limits": { 00:32:53.684 "rw_ios_per_sec": 0, 00:32:53.684 "rw_mbytes_per_sec": 0, 00:32:53.684 "r_mbytes_per_sec": 0, 00:32:53.684 "w_mbytes_per_sec": 0 00:32:53.684 }, 00:32:53.684 "claimed": false, 00:32:53.684 "zoned": false, 00:32:53.684 "supported_io_types": { 00:32:53.684 "read": true, 00:32:53.684 "write": true, 00:32:53.684 "unmap": false, 00:32:53.684 "flush": false, 00:32:53.684 "reset": true, 00:32:53.684 "nvme_admin": false, 00:32:53.684 "nvme_io": false, 00:32:53.684 "nvme_io_md": false, 00:32:53.684 "write_zeroes": true, 00:32:53.684 "zcopy": false, 00:32:53.684 "get_zone_info": false, 00:32:53.684 "zone_management": false, 00:32:53.684 "zone_append": false, 00:32:53.684 "compare": false, 00:32:53.684 "compare_and_write": false, 00:32:53.685 "abort": false, 00:32:53.685 "seek_hole": false, 00:32:53.685 "seek_data": false, 00:32:53.685 "copy": false, 00:32:53.685 "nvme_iov_md": false 00:32:53.685 }, 00:32:53.685 "driver_specific": { 00:32:53.685 "raid": { 00:32:53.685 "uuid": "44940832-aaaf-46ac-87f5-b48ae1a1ed03", 00:32:53.685 "strip_size_kb": 64, 00:32:53.685 "state": "online", 00:32:53.685 "raid_level": "raid5f", 00:32:53.685 "superblock": false, 00:32:53.685 "num_base_bdevs": 4, 00:32:53.685 "num_base_bdevs_discovered": 4, 00:32:53.685 "num_base_bdevs_operational": 4, 00:32:53.685 "base_bdevs_list": [ 00:32:53.685 { 00:32:53.685 "name": "NewBaseBdev", 00:32:53.685 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:53.685 "is_configured": true, 00:32:53.685 "data_offset": 0, 00:32:53.685 "data_size": 65536 00:32:53.685 }, 00:32:53.685 { 00:32:53.685 "name": "BaseBdev2", 00:32:53.685 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:53.685 "is_configured": true, 00:32:53.685 "data_offset": 0, 00:32:53.685 "data_size": 65536 00:32:53.685 }, 00:32:53.685 { 00:32:53.685 "name": "BaseBdev3", 00:32:53.685 "uuid": "574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:53.685 "is_configured": true, 00:32:53.685 "data_offset": 0, 00:32:53.685 "data_size": 65536 00:32:53.685 }, 00:32:53.685 { 00:32:53.685 "name": "BaseBdev4", 00:32:53.685 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:53.685 "is_configured": true, 00:32:53.685 "data_offset": 0, 00:32:53.685 "data_size": 65536 00:32:53.685 } 00:32:53.685 ] 00:32:53.685 } 00:32:53.685 } 00:32:53.685 }' 00:32:53.685 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:53.685 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:53.685 BaseBdev2 00:32:53.685 BaseBdev3 00:32:53.685 BaseBdev4' 00:32:53.685 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:53.685 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:53.685 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:53.943 23:18:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:53.943 "name": "NewBaseBdev", 00:32:53.943 "aliases": [ 00:32:53.943 "11c8c37a-889e-49d8-ae97-3c87ac38daa5" 00:32:53.943 ], 00:32:53.943 "product_name": "Malloc disk", 00:32:53.943 "block_size": 512, 00:32:53.943 "num_blocks": 65536, 00:32:53.943 "uuid": "11c8c37a-889e-49d8-ae97-3c87ac38daa5", 00:32:53.943 "assigned_rate_limits": { 00:32:53.943 "rw_ios_per_sec": 0, 00:32:53.943 "rw_mbytes_per_sec": 0, 00:32:53.943 "r_mbytes_per_sec": 0, 00:32:53.943 "w_mbytes_per_sec": 0 00:32:53.943 }, 00:32:53.943 "claimed": true, 00:32:53.943 "claim_type": "exclusive_write", 00:32:53.943 "zoned": false, 00:32:53.943 "supported_io_types": { 00:32:53.943 "read": true, 00:32:53.943 "write": true, 00:32:53.943 "unmap": true, 00:32:53.943 "flush": true, 00:32:53.943 "reset": true, 00:32:53.943 "nvme_admin": false, 00:32:53.943 "nvme_io": false, 00:32:53.943 "nvme_io_md": false, 00:32:53.943 "write_zeroes": true, 00:32:53.943 "zcopy": true, 00:32:53.943 "get_zone_info": false, 00:32:53.943 "zone_management": false, 00:32:53.943 "zone_append": false, 00:32:53.943 "compare": false, 00:32:53.943 "compare_and_write": false, 00:32:53.943 "abort": true, 00:32:53.943 "seek_hole": false, 00:32:53.943 "seek_data": false, 00:32:53.943 "copy": true, 00:32:53.943 "nvme_iov_md": false 00:32:53.943 }, 00:32:53.943 "memory_domains": [ 00:32:53.943 { 00:32:53.943 "dma_device_id": "system", 00:32:53.943 "dma_device_type": 1 00:32:53.943 }, 00:32:53.943 { 00:32:53.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:53.943 "dma_device_type": 2 00:32:53.943 } 00:32:53.943 ], 00:32:53.943 "driver_specific": {} 00:32:53.943 }' 00:32:53.943 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:54.212 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:54.488 23:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:54.749 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:54.749 "name": "BaseBdev2", 00:32:54.749 "aliases": [ 00:32:54.749 "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5" 
00:32:54.749 ], 00:32:54.749 "product_name": "Malloc disk", 00:32:54.749 "block_size": 512, 00:32:54.749 "num_blocks": 65536, 00:32:54.749 "uuid": "aba6f75f-3043-4ab6-99b7-a0b66f47a9d5", 00:32:54.749 "assigned_rate_limits": { 00:32:54.749 "rw_ios_per_sec": 0, 00:32:54.749 "rw_mbytes_per_sec": 0, 00:32:54.749 "r_mbytes_per_sec": 0, 00:32:54.749 "w_mbytes_per_sec": 0 00:32:54.749 }, 00:32:54.749 "claimed": true, 00:32:54.749 "claim_type": "exclusive_write", 00:32:54.749 "zoned": false, 00:32:54.749 "supported_io_types": { 00:32:54.749 "read": true, 00:32:54.749 "write": true, 00:32:54.749 "unmap": true, 00:32:54.749 "flush": true, 00:32:54.749 "reset": true, 00:32:54.749 "nvme_admin": false, 00:32:54.749 "nvme_io": false, 00:32:54.749 "nvme_io_md": false, 00:32:54.749 "write_zeroes": true, 00:32:54.749 "zcopy": true, 00:32:54.749 "get_zone_info": false, 00:32:54.749 "zone_management": false, 00:32:54.749 "zone_append": false, 00:32:54.749 "compare": false, 00:32:54.749 "compare_and_write": false, 00:32:54.749 "abort": true, 00:32:54.749 "seek_hole": false, 00:32:54.749 "seek_data": false, 00:32:54.749 "copy": true, 00:32:54.749 "nvme_iov_md": false 00:32:54.749 }, 00:32:54.749 "memory_domains": [ 00:32:54.749 { 00:32:54.749 "dma_device_id": "system", 00:32:54.749 "dma_device_type": 1 00:32:54.749 }, 00:32:54.749 { 00:32:54.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.749 "dma_device_type": 2 00:32:54.749 } 00:32:54.749 ], 00:32:54.750 "driver_specific": {} 00:32:54.750 }' 00:32:54.750 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:54.750 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:54.750 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:54.750 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:55.007 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:55.007 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:55.007 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:55.007 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:55.008 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:55.008 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:55.008 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:55.266 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:55.266 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:55.266 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:55.266 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:55.524 "name": "BaseBdev3", 00:32:55.524 "aliases": [ 00:32:55.524 "574e423d-8abe-42ad-b8da-81f812d86cc7" 00:32:55.524 ], 00:32:55.524 "product_name": "Malloc disk", 00:32:55.524 "block_size": 512, 00:32:55.524 "num_blocks": 65536, 00:32:55.524 "uuid": 
"574e423d-8abe-42ad-b8da-81f812d86cc7", 00:32:55.524 "assigned_rate_limits": { 00:32:55.524 "rw_ios_per_sec": 0, 00:32:55.524 "rw_mbytes_per_sec": 0, 00:32:55.524 "r_mbytes_per_sec": 0, 00:32:55.524 "w_mbytes_per_sec": 0 00:32:55.524 }, 00:32:55.524 "claimed": true, 00:32:55.524 "claim_type": "exclusive_write", 00:32:55.524 "zoned": false, 00:32:55.524 "supported_io_types": { 00:32:55.524 "read": true, 00:32:55.524 "write": true, 00:32:55.524 "unmap": true, 00:32:55.524 "flush": true, 00:32:55.524 "reset": true, 00:32:55.524 "nvme_admin": false, 00:32:55.524 "nvme_io": false, 00:32:55.524 "nvme_io_md": false, 00:32:55.524 "write_zeroes": true, 00:32:55.524 "zcopy": true, 00:32:55.524 "get_zone_info": false, 00:32:55.524 "zone_management": false, 00:32:55.524 "zone_append": false, 00:32:55.524 "compare": false, 00:32:55.524 "compare_and_write": false, 00:32:55.524 "abort": true, 00:32:55.524 "seek_hole": false, 00:32:55.524 "seek_data": false, 00:32:55.524 "copy": true, 00:32:55.524 "nvme_iov_md": false 00:32:55.524 }, 00:32:55.524 "memory_domains": [ 00:32:55.524 { 00:32:55.524 "dma_device_id": "system", 00:32:55.524 "dma_device_type": 1 00:32:55.524 }, 00:32:55.524 { 00:32:55.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:55.524 "dma_device_type": 2 00:32:55.524 } 00:32:55.524 ], 00:32:55.524 "driver_specific": {} 00:32:55.524 }' 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:55.524 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:55.782 23:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:55.782 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:56.040 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:56.040 "name": "BaseBdev4", 00:32:56.040 "aliases": [ 00:32:56.040 "04d67f6c-7f74-4798-9e04-b38278a3e631" 00:32:56.040 ], 00:32:56.040 "product_name": "Malloc disk", 00:32:56.040 "block_size": 512, 00:32:56.040 "num_blocks": 65536, 00:32:56.040 "uuid": "04d67f6c-7f74-4798-9e04-b38278a3e631", 00:32:56.040 "assigned_rate_limits": { 00:32:56.040 "rw_ios_per_sec": 0, 00:32:56.040 "rw_mbytes_per_sec": 0, 00:32:56.040 
"r_mbytes_per_sec": 0, 00:32:56.040 "w_mbytes_per_sec": 0 00:32:56.040 }, 00:32:56.040 "claimed": true, 00:32:56.040 "claim_type": "exclusive_write", 00:32:56.040 "zoned": false, 00:32:56.040 "supported_io_types": { 00:32:56.040 "read": true, 00:32:56.040 "write": true, 00:32:56.040 "unmap": true, 00:32:56.040 "flush": true, 00:32:56.040 "reset": true, 00:32:56.040 "nvme_admin": false, 00:32:56.040 "nvme_io": false, 00:32:56.040 "nvme_io_md": false, 00:32:56.040 "write_zeroes": true, 00:32:56.040 "zcopy": true, 00:32:56.040 "get_zone_info": false, 00:32:56.040 "zone_management": false, 00:32:56.040 "zone_append": false, 00:32:56.040 "compare": false, 00:32:56.040 "compare_and_write": false, 00:32:56.040 "abort": true, 00:32:56.040 "seek_hole": false, 00:32:56.040 "seek_data": false, 00:32:56.040 "copy": true, 00:32:56.040 "nvme_iov_md": false 00:32:56.040 }, 00:32:56.040 "memory_domains": [ 00:32:56.040 { 00:32:56.040 "dma_device_id": "system", 00:32:56.040 "dma_device_type": 1 00:32:56.040 }, 00:32:56.040 { 00:32:56.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:56.040 "dma_device_type": 2 00:32:56.040 } 00:32:56.040 ], 00:32:56.040 "driver_specific": {} 00:32:56.040 }' 00:32:56.040 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:56.040 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:56.297 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:56.555 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:56.555 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:56.555 23:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:56.814 [2024-07-13 23:18:46.040318] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:56.814 [2024-07-13 23:18:46.040372] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:56.814 [2024-07-13 23:18:46.040471] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:56.814 [2024-07-13 23:18:46.040787] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:56.814 [2024-07-13 23:18:46.040812] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 163771 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 163771 ']' 
00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 163771 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163771 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163771' 00:32:56.814 killing process with pid 163771 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 163771 00:32:56.814 [2024-07-13 23:18:46.090993] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:56.814 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 163771 00:32:56.814 [2024-07-13 23:18:46.127068] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:32:57.073 00:32:57.073 real 0m33.218s 00:32:57.073 user 1m3.149s 00:32:57.073 sys 0m4.073s 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.073 ************************************ 00:32:57.073 END TEST raid5f_state_function_test 00:32:57.073 ************************************ 00:32:57.073 23:18:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:57.073 23:18:46 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:32:57.073 23:18:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:57.073 23:18:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.073 23:18:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:57.073 ************************************ 00:32:57.073 START TEST raid5f_state_function_test_sb 00:32:57.073 ************************************ 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 true 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:57.073 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:57.074 23:18:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=164850 00:32:57.074 Process raid pid: 164850 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 164850' 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 164850 /var/tmp/spdk-raid.sock 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 164850 ']' 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
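[editor's note] The setup being logged here, bdev_svc launched on a private RPC socket followed by a superblock-enabled raid5f create against base bdevs that do not exist yet, can be reproduced roughly as below. The rpc_get_methods polling loop is an illustrative stand-in for the waitforlisten() helper, not its real implementation; the paths and names are the ones used in this workspace.

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Start the standalone bdev app with bdev_raid debug logging, as above.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Poll until the RPC socket answers (simplified waitforlisten stand-in).
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # 64 KiB strip size (-z 64), on-disk superblock (-s), level raid5f.
    # None of the four base bdevs exist yet, so the RPC log prints
    # "base bdev BaseBdevN doesn't exist now" and the array stays "configuring"
    # until each one is created and claimed.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid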
00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:57.074 23:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:57.332 [2024-07-13 23:18:46.491919] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:32:57.332 [2024-07-13 23:18:46.492194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:57.332 [2024-07-13 23:18:46.644665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:57.332 [2024-07-13 23:18:46.720625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:57.591 [2024-07-13 23:18:46.780779] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:32:58.157 23:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:58.157 23:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0
00:32:58.157 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:32:58.416 [2024-07-13 23:18:47.639542] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:32:58.416 [2024-07-13 23:18:47.639628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:32:58.416 [2024-07-13 23:18:47.639642] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:32:58.416 [2024-07-13 23:18:47.639660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:32:58.416 [2024-07-13 23:18:47.639668] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:32:58.416 [2024-07-13 23:18:47.639709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:32:58.416 [2024-07-13 23:18:47.639719] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:32:58.416 [2024-07-13 23:18:47.639740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:32:58.416 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:58.675 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:32:58.675 "name": "Existed_Raid",
00:32:58.675 "uuid": "938021ed-b2df-48b9-8df6-13bad9788a89",
00:32:58.675 "strip_size_kb": 64,
00:32:58.675 "state": "configuring",
00:32:58.675 "raid_level": "raid5f",
00:32:58.675 "superblock": true,
00:32:58.675 "num_base_bdevs": 4,
00:32:58.675 "num_base_bdevs_discovered": 0,
00:32:58.675 "num_base_bdevs_operational": 4,
00:32:58.675 "base_bdevs_list": [
00:32:58.675 {
00:32:58.675 "name": "BaseBdev1",
00:32:58.675 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:58.675 "is_configured": false,
00:32:58.675 "data_offset": 0,
00:32:58.675 "data_size": 0
00:32:58.675 },
00:32:58.675 {
00:32:58.675 "name": "BaseBdev2",
00:32:58.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:58.676 "is_configured": false,
00:32:58.676 "data_offset": 0,
00:32:58.676 "data_size": 0
00:32:58.676 },
00:32:58.676 {
00:32:58.676 "name": "BaseBdev3",
00:32:58.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:58.676 "is_configured": false,
00:32:58.676 "data_offset": 0,
00:32:58.676 "data_size": 0
00:32:58.676 },
00:32:58.676 {
00:32:58.676 "name": "BaseBdev4",
00:32:58.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:58.676 "is_configured": false,
00:32:58.676 "data_offset": 0,
00:32:58.676 "data_size": 0
00:32:58.676 }
00:32:58.676 ]
00:32:58.676 }'
00:32:58.676 23:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:32:58.676 23:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:59.243 23:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:32:59.503 [2024-07-13 23:18:48.763701] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:32:59.503 [2024-07-13 23:18:48.763772] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
23:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:32:59.762 [2024-07-13 23:18:49.007753] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:32:59.762 [2024-07-13 23:18:49.007826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:32:59.762 [2024-07-13 23:18:49.007854] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:32:59.762 [2024-07-13 23:18:49.007879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:32:59.762 [2024-07-13 23:18:49.007888] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:32:59.762 [2024-07-13 23:18:49.007904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:32:59.762 [2024-07-13 23:18:49.007912] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:32:59.762 [2024-07-13 23:18:49.007939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:32:59.762 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:33:00.020 [2024-07-13 23:18:49.258539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:33:00.020 BaseBdev1
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:33:00.021 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:33:00.279 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:33:00.538 [
00:33:00.538 {
00:33:00.538 "name": "BaseBdev1",
00:33:00.538 "aliases": [
00:33:00.538 "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd"
00:33:00.538 ],
00:33:00.538 "product_name": "Malloc disk",
00:33:00.538 "block_size": 512,
00:33:00.538 "num_blocks": 65536,
00:33:00.538 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd",
00:33:00.538 "assigned_rate_limits": {
00:33:00.538 "rw_ios_per_sec": 0,
00:33:00.538 "rw_mbytes_per_sec": 0,
00:33:00.538 "r_mbytes_per_sec": 0,
00:33:00.538 "w_mbytes_per_sec": 0
00:33:00.538 },
00:33:00.538 "claimed": true,
00:33:00.538 "claim_type": "exclusive_write",
00:33:00.538 "zoned": false,
00:33:00.538 "supported_io_types": {
00:33:00.538 "read": true,
00:33:00.538 "write": true,
00:33:00.538 "unmap": true,
00:33:00.538 "flush": true,
00:33:00.538 "reset": true,
00:33:00.538 "nvme_admin": false,
00:33:00.538 "nvme_io": false,
00:33:00.538 "nvme_io_md": false,
00:33:00.538 "write_zeroes": true,
00:33:00.538 "zcopy": true,
00:33:00.538 "get_zone_info": false,
00:33:00.538 "zone_management": false,
00:33:00.538 "zone_append": false,
00:33:00.538 "compare": false,
00:33:00.538 "compare_and_write": false,
00:33:00.538 "abort": true,
00:33:00.538 "seek_hole": false,
00:33:00.538 "seek_data": false,
00:33:00.538 "copy": true,
00:33:00.538 "nvme_iov_md": false
00:33:00.538 },
00:33:00.538 "memory_domains": [
00:33:00.538 {
00:33:00.538 "dma_device_id": "system",
00:33:00.538 "dma_device_type": 1
00:33:00.538 },
00:33:00.538 {
00:33:00.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:00.538 "dma_device_type": 2
00:33:00.538 }
00:33:00.538 ],
00:33:00.538 "driver_specific": {}
00:33:00.538 }
00:33:00.538 ]
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:00.538 23:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:33:00.796 23:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:33:00.796 "name": "Existed_Raid",
00:33:00.796 "uuid": "26a403ee-aae0-4e8d-9438-2bac042b794b",
00:33:00.796 "strip_size_kb": 64,
00:33:00.796 "state": "configuring",
00:33:00.796 "raid_level": "raid5f",
00:33:00.796 "superblock": true,
00:33:00.796 "num_base_bdevs": 4,
00:33:00.796 "num_base_bdevs_discovered": 1,
00:33:00.796 "num_base_bdevs_operational": 4,
00:33:00.796 "base_bdevs_list": [
00:33:00.796 {
00:33:00.796 "name": "BaseBdev1",
00:33:00.796 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd",
00:33:00.796 "is_configured": true,
00:33:00.796 "data_offset": 2048,
00:33:00.796 "data_size": 63488
00:33:00.796 },
00:33:00.796 {
00:33:00.796 "name": "BaseBdev2",
00:33:00.796 "uuid": "00000000-0000-0000-0000-000000000000",
00:33:00.796 "is_configured": false,
00:33:00.796 "data_offset": 0,
00:33:00.796 "data_size": 0
00:33:00.796 },
00:33:00.796 {
00:33:00.796 "name": "BaseBdev3",
00:33:00.796 "uuid": "00000000-0000-0000-0000-000000000000",
00:33:00.796 "is_configured": false,
00:33:00.796 "data_offset": 0,
00:33:00.796 "data_size": 0
00:33:00.796 },
00:33:00.796 {
00:33:00.796 "name": "BaseBdev4",
00:33:00.796 "uuid": "00000000-0000-0000-0000-000000000000",
00:33:00.796 "is_configured": false,
00:33:00.796 "data_offset": 0,
00:33:00.796 "data_size": 0
00:33:00.796 }
00:33:00.796 ]
00:33:00.796 }'
00:33:00.796 23:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:33:00.796 23:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:33:01.363 23:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:33:01.622 [2024-07-13 23:18:50.902980] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:33:01.622 [2024-07-13 23:18:50.903067] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:33:01.622 23:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:33:01.880 [2024-07-13 23:18:51.155068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:33:01.880 [2024-07-13 23:18:51.157305] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:33:01.880 [2024-07-13 23:18:51.157416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:33:01.880 [2024-07-13 23:18:51.157444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:33:01.880 [2024-07-13 23:18:51.157473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:33:01.880 [2024-07-13 23:18:51.157482] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:33:01.880 [2024-07-13 23:18:51.157499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:01.880 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:33:02.138 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:33:02.138 "name": "Existed_Raid",
00:33:02.138 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f",
00:33:02.138 "strip_size_kb": 64,
00:33:02.138 "state": "configuring",
00:33:02.138 "raid_level": "raid5f",
00:33:02.138 "superblock": true, 00:33:02.138 "num_base_bdevs": 4, 00:33:02.138 "num_base_bdevs_discovered": 1, 00:33:02.138 "num_base_bdevs_operational": 4, 00:33:02.138 "base_bdevs_list": [ 00:33:02.138 { 00:33:02.138 "name": "BaseBdev1", 00:33:02.138 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd", 00:33:02.138 "is_configured": true, 00:33:02.138 "data_offset": 2048, 00:33:02.138 "data_size": 63488 00:33:02.138 }, 00:33:02.138 { 00:33:02.138 "name": "BaseBdev2", 00:33:02.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.138 "is_configured": false, 00:33:02.138 "data_offset": 0, 00:33:02.138 "data_size": 0 00:33:02.138 }, 00:33:02.138 { 00:33:02.138 "name": "BaseBdev3", 00:33:02.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.138 "is_configured": false, 00:33:02.138 "data_offset": 0, 00:33:02.138 "data_size": 0 00:33:02.138 }, 00:33:02.138 { 00:33:02.138 "name": "BaseBdev4", 00:33:02.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.138 "is_configured": false, 00:33:02.138 "data_offset": 0, 00:33:02.138 "data_size": 0 00:33:02.138 } 00:33:02.138 ] 00:33:02.138 }' 00:33:02.138 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.138 23:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:02.734 23:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:03.002 [2024-07-13 23:18:52.277042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:03.002 BaseBdev2 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:03.002 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:03.260 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:03.518 [ 00:33:03.518 { 00:33:03.518 "name": "BaseBdev2", 00:33:03.518 "aliases": [ 00:33:03.518 "54ec5787-6269-4478-a6c0-2bb430cc553c" 00:33:03.518 ], 00:33:03.518 "product_name": "Malloc disk", 00:33:03.518 "block_size": 512, 00:33:03.518 "num_blocks": 65536, 00:33:03.518 "uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c", 00:33:03.518 "assigned_rate_limits": { 00:33:03.518 "rw_ios_per_sec": 0, 00:33:03.518 "rw_mbytes_per_sec": 0, 00:33:03.518 "r_mbytes_per_sec": 0, 00:33:03.518 "w_mbytes_per_sec": 0 00:33:03.518 }, 00:33:03.518 "claimed": true, 00:33:03.518 "claim_type": "exclusive_write", 00:33:03.518 "zoned": false, 00:33:03.518 "supported_io_types": { 00:33:03.518 "read": true, 00:33:03.518 "write": true, 00:33:03.518 "unmap": true, 00:33:03.518 "flush": 
true, 00:33:03.518 "reset": true, 00:33:03.518 "nvme_admin": false, 00:33:03.518 "nvme_io": false, 00:33:03.518 "nvme_io_md": false, 00:33:03.518 "write_zeroes": true, 00:33:03.518 "zcopy": true, 00:33:03.518 "get_zone_info": false, 00:33:03.518 "zone_management": false, 00:33:03.518 "zone_append": false, 00:33:03.518 "compare": false, 00:33:03.518 "compare_and_write": false, 00:33:03.518 "abort": true, 00:33:03.518 "seek_hole": false, 00:33:03.518 "seek_data": false, 00:33:03.518 "copy": true, 00:33:03.518 "nvme_iov_md": false 00:33:03.518 }, 00:33:03.518 "memory_domains": [ 00:33:03.518 { 00:33:03.518 "dma_device_id": "system", 00:33:03.518 "dma_device_type": 1 00:33:03.518 }, 00:33:03.518 { 00:33:03.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.518 "dma_device_type": 2 00:33:03.518 } 00:33:03.518 ], 00:33:03.518 "driver_specific": {} 00:33:03.518 } 00:33:03.518 ] 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.518 23:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:03.776 23:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:03.776 "name": "Existed_Raid", 00:33:03.776 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f", 00:33:03.776 "strip_size_kb": 64, 00:33:03.776 "state": "configuring", 00:33:03.776 "raid_level": "raid5f", 00:33:03.776 "superblock": true, 00:33:03.776 "num_base_bdevs": 4, 00:33:03.776 "num_base_bdevs_discovered": 2, 00:33:03.776 "num_base_bdevs_operational": 4, 00:33:03.776 "base_bdevs_list": [ 00:33:03.776 { 00:33:03.776 "name": "BaseBdev1", 00:33:03.776 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd", 00:33:03.776 "is_configured": true, 00:33:03.776 "data_offset": 2048, 00:33:03.776 "data_size": 63488 00:33:03.776 }, 00:33:03.776 { 00:33:03.776 "name": "BaseBdev2", 00:33:03.776 
"uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c", 00:33:03.776 "is_configured": true, 00:33:03.776 "data_offset": 2048, 00:33:03.776 "data_size": 63488 00:33:03.776 }, 00:33:03.776 { 00:33:03.776 "name": "BaseBdev3", 00:33:03.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.776 "is_configured": false, 00:33:03.776 "data_offset": 0, 00:33:03.776 "data_size": 0 00:33:03.776 }, 00:33:03.776 { 00:33:03.776 "name": "BaseBdev4", 00:33:03.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.776 "is_configured": false, 00:33:03.776 "data_offset": 0, 00:33:03.776 "data_size": 0 00:33:03.776 } 00:33:03.776 ] 00:33:03.776 }' 00:33:03.776 23:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:03.776 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:04.342 23:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:04.600 [2024-07-13 23:18:53.910042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:04.600 BaseBdev3 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:04.600 23:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:04.858 23:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:05.116 [ 00:33:05.116 { 00:33:05.116 "name": "BaseBdev3", 00:33:05.116 "aliases": [ 00:33:05.116 "b4998f82-7e92-468d-a89c-5e26a5ba0af2" 00:33:05.116 ], 00:33:05.116 "product_name": "Malloc disk", 00:33:05.116 "block_size": 512, 00:33:05.116 "num_blocks": 65536, 00:33:05.116 "uuid": "b4998f82-7e92-468d-a89c-5e26a5ba0af2", 00:33:05.116 "assigned_rate_limits": { 00:33:05.116 "rw_ios_per_sec": 0, 00:33:05.116 "rw_mbytes_per_sec": 0, 00:33:05.116 "r_mbytes_per_sec": 0, 00:33:05.116 "w_mbytes_per_sec": 0 00:33:05.116 }, 00:33:05.116 "claimed": true, 00:33:05.116 "claim_type": "exclusive_write", 00:33:05.116 "zoned": false, 00:33:05.116 "supported_io_types": { 00:33:05.116 "read": true, 00:33:05.116 "write": true, 00:33:05.116 "unmap": true, 00:33:05.116 "flush": true, 00:33:05.116 "reset": true, 00:33:05.116 "nvme_admin": false, 00:33:05.116 "nvme_io": false, 00:33:05.116 "nvme_io_md": false, 00:33:05.116 "write_zeroes": true, 00:33:05.116 "zcopy": true, 00:33:05.116 "get_zone_info": false, 00:33:05.116 "zone_management": false, 00:33:05.116 "zone_append": false, 00:33:05.116 "compare": false, 00:33:05.116 "compare_and_write": false, 00:33:05.116 "abort": true, 00:33:05.116 "seek_hole": false, 00:33:05.116 "seek_data": false, 
00:33:05.116 "copy": true, 00:33:05.116 "nvme_iov_md": false 00:33:05.116 }, 00:33:05.116 "memory_domains": [ 00:33:05.116 { 00:33:05.116 "dma_device_id": "system", 00:33:05.116 "dma_device_type": 1 00:33:05.116 }, 00:33:05.116 { 00:33:05.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.116 "dma_device_type": 2 00:33:05.116 } 00:33:05.116 ], 00:33:05.116 "driver_specific": {} 00:33:05.116 } 00:33:05.116 ] 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:05.116 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:05.117 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:05.117 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:05.117 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:05.117 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:05.117 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.117 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.375 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:05.375 "name": "Existed_Raid", 00:33:05.375 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f", 00:33:05.375 "strip_size_kb": 64, 00:33:05.375 "state": "configuring", 00:33:05.375 "raid_level": "raid5f", 00:33:05.375 "superblock": true, 00:33:05.375 "num_base_bdevs": 4, 00:33:05.375 "num_base_bdevs_discovered": 3, 00:33:05.375 "num_base_bdevs_operational": 4, 00:33:05.375 "base_bdevs_list": [ 00:33:05.375 { 00:33:05.375 "name": "BaseBdev1", 00:33:05.375 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd", 00:33:05.375 "is_configured": true, 00:33:05.375 "data_offset": 2048, 00:33:05.375 "data_size": 63488 00:33:05.375 }, 00:33:05.375 { 00:33:05.375 "name": "BaseBdev2", 00:33:05.375 "uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c", 00:33:05.375 "is_configured": true, 00:33:05.375 "data_offset": 2048, 00:33:05.376 "data_size": 63488 00:33:05.376 }, 00:33:05.376 { 00:33:05.376 "name": "BaseBdev3", 00:33:05.376 "uuid": "b4998f82-7e92-468d-a89c-5e26a5ba0af2", 00:33:05.376 "is_configured": true, 00:33:05.376 "data_offset": 2048, 00:33:05.376 "data_size": 63488 00:33:05.376 }, 00:33:05.376 { 00:33:05.376 "name": "BaseBdev4", 00:33:05.376 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:05.376 "is_configured": false, 00:33:05.376 "data_offset": 0, 00:33:05.376 "data_size": 0 00:33:05.376 } 00:33:05.376 ] 00:33:05.376 }' 00:33:05.376 23:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:05.376 23:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.942 23:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:06.200 [2024-07-13 23:18:55.564513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:06.200 [2024-07-13 23:18:55.564783] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:33:06.200 [2024-07-13 23:18:55.564799] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:06.200 [2024-07-13 23:18:55.565008] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:33:06.200 BaseBdev4 00:33:06.200 [2024-07-13 23:18:55.565948] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:33:06.200 [2024-07-13 23:18:55.565966] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:33:06.200 [2024-07-13 23:18:55.566122] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:06.200 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:06.458 23:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:06.717 [ 00:33:06.717 { 00:33:06.717 "name": "BaseBdev4", 00:33:06.717 "aliases": [ 00:33:06.717 "4aa12883-060a-4ead-be76-3122e6398342" 00:33:06.717 ], 00:33:06.717 "product_name": "Malloc disk", 00:33:06.717 "block_size": 512, 00:33:06.717 "num_blocks": 65536, 00:33:06.717 "uuid": "4aa12883-060a-4ead-be76-3122e6398342", 00:33:06.717 "assigned_rate_limits": { 00:33:06.717 "rw_ios_per_sec": 0, 00:33:06.717 "rw_mbytes_per_sec": 0, 00:33:06.717 "r_mbytes_per_sec": 0, 00:33:06.717 "w_mbytes_per_sec": 0 00:33:06.717 }, 00:33:06.717 "claimed": true, 00:33:06.717 "claim_type": "exclusive_write", 00:33:06.717 "zoned": false, 00:33:06.717 "supported_io_types": { 00:33:06.717 "read": true, 00:33:06.717 "write": true, 00:33:06.717 "unmap": true, 00:33:06.717 "flush": true, 00:33:06.717 "reset": true, 00:33:06.717 "nvme_admin": false, 00:33:06.717 "nvme_io": false, 00:33:06.717 "nvme_io_md": false, 00:33:06.717 
"write_zeroes": true, 00:33:06.717 "zcopy": true, 00:33:06.717 "get_zone_info": false, 00:33:06.717 "zone_management": false, 00:33:06.717 "zone_append": false, 00:33:06.717 "compare": false, 00:33:06.717 "compare_and_write": false, 00:33:06.717 "abort": true, 00:33:06.717 "seek_hole": false, 00:33:06.717 "seek_data": false, 00:33:06.717 "copy": true, 00:33:06.717 "nvme_iov_md": false 00:33:06.717 }, 00:33:06.717 "memory_domains": [ 00:33:06.717 { 00:33:06.717 "dma_device_id": "system", 00:33:06.717 "dma_device_type": 1 00:33:06.717 }, 00:33:06.717 { 00:33:06.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.717 "dma_device_type": 2 00:33:06.717 } 00:33:06.717 ], 00:33:06.717 "driver_specific": {} 00:33:06.717 } 00:33:06.717 ] 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.717 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:06.975 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.975 "name": "Existed_Raid", 00:33:06.975 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f", 00:33:06.975 "strip_size_kb": 64, 00:33:06.975 "state": "online", 00:33:06.975 "raid_level": "raid5f", 00:33:06.975 "superblock": true, 00:33:06.975 "num_base_bdevs": 4, 00:33:06.975 "num_base_bdevs_discovered": 4, 00:33:06.975 "num_base_bdevs_operational": 4, 00:33:06.975 "base_bdevs_list": [ 00:33:06.975 { 00:33:06.975 "name": "BaseBdev1", 00:33:06.975 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd", 00:33:06.975 "is_configured": true, 00:33:06.975 "data_offset": 2048, 00:33:06.975 "data_size": 63488 00:33:06.975 }, 00:33:06.975 { 00:33:06.975 "name": "BaseBdev2", 00:33:06.975 "uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c", 00:33:06.975 "is_configured": true, 00:33:06.975 "data_offset": 2048, 00:33:06.975 "data_size": 63488 00:33:06.975 }, 
00:33:06.975 {
00:33:06.975 "name": "BaseBdev3",
00:33:06.975 "uuid": "b4998f82-7e92-468d-a89c-5e26a5ba0af2",
00:33:06.975 "is_configured": true,
00:33:06.975 "data_offset": 2048,
00:33:06.975 "data_size": 63488
00:33:06.975 },
00:33:06.975 {
00:33:06.975 "name": "BaseBdev4",
00:33:06.975 "uuid": "4aa12883-060a-4ead-be76-3122e6398342",
00:33:06.975 "is_configured": true,
00:33:06.975 "data_offset": 2048,
00:33:06.975 "data_size": 63488
00:33:06.975 }
00:33:06.975 ]
00:33:06.975 }'
00:33:06.975 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:33:06.975 23:18:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:33:07.908 23:18:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:33:07.908 [2024-07-13 23:18:57.249208] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:33:07.908 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:33:07.908 "name": "Existed_Raid",
00:33:07.908 "aliases": [
00:33:07.908 "21664ac2-27c5-41a1-9dad-c0fcb8269b5f"
00:33:07.908 ],
00:33:07.908 "product_name": "Raid Volume",
00:33:07.908 "block_size": 512,
00:33:07.908 "num_blocks": 190464,
00:33:07.908 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f",
00:33:07.908 "assigned_rate_limits": {
00:33:07.908 "rw_ios_per_sec": 0,
00:33:07.908 "rw_mbytes_per_sec": 0,
00:33:07.908 "r_mbytes_per_sec": 0,
00:33:07.908 "w_mbytes_per_sec": 0
00:33:07.908 },
00:33:07.908 "claimed": false,
00:33:07.908 "zoned": false,
00:33:07.908 "supported_io_types": {
00:33:07.908 "read": true,
00:33:07.908 "write": true,
00:33:07.908 "unmap": false,
00:33:07.908 "flush": false,
00:33:07.908 "reset": true,
00:33:07.908 "nvme_admin": false,
00:33:07.908 "nvme_io": false,
00:33:07.908 "nvme_io_md": false,
00:33:07.908 "write_zeroes": true,
00:33:07.908 "zcopy": false,
00:33:07.908 "get_zone_info": false,
00:33:07.908 "zone_management": false,
00:33:07.908 "zone_append": false,
00:33:07.908 "compare": false,
00:33:07.908 "compare_and_write": false,
00:33:07.908 "abort": false,
00:33:07.908 "seek_hole": false,
00:33:07.908 "seek_data": false,
00:33:07.908 "copy": false,
00:33:07.908 "nvme_iov_md": false
00:33:07.908 },
00:33:07.908 "driver_specific": {
00:33:07.908 "raid": {
00:33:07.908 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f",
00:33:07.908 "strip_size_kb": 64,
00:33:07.908 "state": "online",
00:33:07.908 "raid_level": "raid5f",
00:33:07.908 "superblock": true,
00:33:07.908 "num_base_bdevs": 4,
00:33:07.908 "num_base_bdevs_discovered": 4,
00:33:07.908 "num_base_bdevs_operational": 4,
00:33:07.908 "base_bdevs_list": [
00:33:07.908 {
00:33:07.908 "name": "BaseBdev1",
00:33:07.909 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd",
00:33:07.909 "is_configured": true,
00:33:07.909 "data_offset": 2048,
00:33:07.909 "data_size": 63488
00:33:07.909 },
00:33:07.909 {
00:33:07.909 "name": "BaseBdev2",
00:33:07.909 "uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c",
00:33:07.909 "is_configured": true,
00:33:07.909 "data_offset": 2048,
00:33:07.909 "data_size": 63488
00:33:07.909 },
00:33:07.909 {
00:33:07.909 "name": "BaseBdev3",
00:33:07.909 "uuid": "b4998f82-7e92-468d-a89c-5e26a5ba0af2",
00:33:07.909 "is_configured": true,
00:33:07.909 "data_offset": 2048,
00:33:07.909 "data_size": 63488
00:33:07.909 },
00:33:07.909 {
00:33:07.909 "name": "BaseBdev4",
00:33:07.909 "uuid": "4aa12883-060a-4ead-be76-3122e6398342",
00:33:07.909 "is_configured": true,
00:33:07.909 "data_offset": 2048,
00:33:07.909 "data_size": 63488
00:33:07.909 }
00:33:07.909 ]
00:33:07.909 }
00:33:07.909 }
00:33:07.909 }'
00:33:07.909 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:33:08.166 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:33:08.166 BaseBdev2
00:33:08.166 BaseBdev3
00:33:08.166 BaseBdev4'
00:33:08.166 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:33:08.166 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:33:08.166 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:33:08.423 "name": "BaseBdev1",
00:33:08.423 "aliases": [
00:33:08.423 "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd"
00:33:08.423 ],
00:33:08.423 "product_name": "Malloc disk",
00:33:08.423 "block_size": 512,
00:33:08.423 "num_blocks": 65536,
00:33:08.423 "uuid": "25fce5fc-bc12-4afe-b25a-a5738ff7c6cd",
00:33:08.423 "assigned_rate_limits": {
00:33:08.423 "rw_ios_per_sec": 0,
00:33:08.423 "rw_mbytes_per_sec": 0,
00:33:08.423 "r_mbytes_per_sec": 0,
00:33:08.423 "w_mbytes_per_sec": 0
00:33:08.423 },
00:33:08.423 "claimed": true,
00:33:08.423 "claim_type": "exclusive_write",
00:33:08.423 "zoned": false,
00:33:08.423 "supported_io_types": {
00:33:08.423 "read": true,
00:33:08.423 "write": true,
00:33:08.423 "unmap": true,
00:33:08.423 "flush": true,
00:33:08.423 "reset": true,
00:33:08.423 "nvme_admin": false,
00:33:08.423 "nvme_io": false,
00:33:08.423 "nvme_io_md": false,
00:33:08.423 "write_zeroes": true,
00:33:08.423 "zcopy": true,
00:33:08.423 "get_zone_info": false,
00:33:08.423 "zone_management": false,
00:33:08.423 "zone_append": false,
00:33:08.423 "compare": false,
00:33:08.423 "compare_and_write": false,
00:33:08.423 "abort": true,
00:33:08.423 "seek_hole": false,
00:33:08.423 "seek_data": false,
00:33:08.423 "copy": true,
00:33:08.423 "nvme_iov_md": false
00:33:08.423 },
00:33:08.423 "memory_domains": [
00:33:08.423 {
00:33:08.423 "dma_device_id": "system",
00:33:08.423 "dma_device_type": 1
00:33:08.423 },
00:33:08.423 {
00:33:08.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:08.423 "dma_device_type": 2
00:33:08.423 }
00:33:08.423 ],
00:33:08.423 "driver_specific": {}
00:33:08.423 }'
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:33:08.423 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:08.681 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:08.681 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:33:08.681 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:08.681 23:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:08.681 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:33:08.681 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:33:08.681 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:33:08.681 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:33:08.939 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:33:08.939 "name": "BaseBdev2",
00:33:08.939 "aliases": [
00:33:08.939 "54ec5787-6269-4478-a6c0-2bb430cc553c"
00:33:08.939 ],
00:33:08.939 "product_name": "Malloc disk",
00:33:08.939 "block_size": 512,
00:33:08.939 "num_blocks": 65536,
00:33:08.939 "uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c",
00:33:08.939 "assigned_rate_limits": {
00:33:08.939 "rw_ios_per_sec": 0,
00:33:08.939 "rw_mbytes_per_sec": 0,
00:33:08.939 "r_mbytes_per_sec": 0,
00:33:08.939 "w_mbytes_per_sec": 0
00:33:08.939 },
00:33:08.939 "claimed": true,
00:33:08.939 "claim_type": "exclusive_write",
00:33:08.939 "zoned": false,
00:33:08.939 "supported_io_types": {
00:33:08.939 "read": true,
00:33:08.939 "write": true,
00:33:08.939 "unmap": true,
00:33:08.939 "flush": true,
00:33:08.939 "reset": true,
00:33:08.939 "nvme_admin": false,
00:33:08.939 "nvme_io": false,
00:33:08.939 "nvme_io_md": false,
00:33:08.939 "write_zeroes": true,
00:33:08.939 "zcopy": true,
00:33:08.939 "get_zone_info": false,
00:33:08.939 "zone_management": false,
00:33:08.939 "zone_append": false,
00:33:08.939 "compare": false,
00:33:08.939 "compare_and_write": false,
00:33:08.939 "abort": true,
00:33:08.939 "seek_hole": false,
00:33:08.939 "seek_data": false,
00:33:08.939 "copy": true,
00:33:08.939 "nvme_iov_md": false
00:33:08.939 },
00:33:08.939 "memory_domains": [
00:33:08.939 {
00:33:08.939 "dma_device_id": "system",
00:33:08.939 "dma_device_type": 1
00:33:08.939 },
00:33:08.939 {
00:33:08.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:08.939 "dma_device_type": 2
00:33:08.939 }
00:33:08.939 ],
00:33:08.939 "driver_specific": {}
00:33:08.939 }'
00:33:08.939 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:08.939 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:09.196 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:33:09.455 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:09.455 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:09.455 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:33:09.455 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:33:09.455 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:33:09.455 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:33:09.713 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:33:09.713 "name": "BaseBdev3",
00:33:09.713 "aliases": [
00:33:09.713 "b4998f82-7e92-468d-a89c-5e26a5ba0af2"
00:33:09.713 ],
00:33:09.713 "product_name": "Malloc disk",
00:33:09.713 "block_size": 512,
00:33:09.713 "num_blocks": 65536,
00:33:09.713 "uuid": "b4998f82-7e92-468d-a89c-5e26a5ba0af2",
00:33:09.713 "assigned_rate_limits": {
00:33:09.713 "rw_ios_per_sec": 0,
00:33:09.713 "rw_mbytes_per_sec": 0,
00:33:09.713 "r_mbytes_per_sec": 0,
00:33:09.713 "w_mbytes_per_sec": 0
00:33:09.713 },
00:33:09.713 "claimed": true,
00:33:09.713 "claim_type": "exclusive_write",
00:33:09.713 "zoned": false,
00:33:09.713 "supported_io_types": {
00:33:09.713 "read": true,
00:33:09.713 "write": true,
00:33:09.713 "unmap": true,
00:33:09.713 "flush": true,
00:33:09.713 "reset": true,
00:33:09.713 "nvme_admin": false,
00:33:09.713 "nvme_io": false,
00:33:09.713 "nvme_io_md": false,
00:33:09.713 "write_zeroes": true,
00:33:09.713 "zcopy": true,
00:33:09.713 "get_zone_info": false,
00:33:09.713 "zone_management": false,
00:33:09.713 "zone_append": false,
00:33:09.713 "compare": false,
00:33:09.713 "compare_and_write": false,
00:33:09.713 "abort": true,
00:33:09.713 "seek_hole": false,
00:33:09.713 "seek_data": false,
00:33:09.713 "copy": true,
00:33:09.713 "nvme_iov_md": false
00:33:09.713 },
00:33:09.713 "memory_domains": [
00:33:09.713 {
00:33:09.713 "dma_device_id": "system",
00:33:09.713 "dma_device_type": 1
00:33:09.713 },
00:33:09.713 {
00:33:09.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:09.713 "dma_device_type": 2
00:33:09.713 }
00:33:09.713 ],
00:33:09.713 "driver_specific": {}
00:33:09.713 }'
00:33:09.713 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:09.713 23:18:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:09.713 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:33:09.713 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:09.713 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:09.713 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:33:09.713 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4
00:33:09.971 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:33:10.229 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:33:10.229 "name": "BaseBdev4",
00:33:10.229 "aliases": [
00:33:10.229 "4aa12883-060a-4ead-be76-3122e6398342"
00:33:10.229 ],
00:33:10.229 "product_name": "Malloc disk",
00:33:10.229 "block_size": 512,
00:33:10.229 "num_blocks": 65536,
00:33:10.229 "uuid": "4aa12883-060a-4ead-be76-3122e6398342",
00:33:10.229 "assigned_rate_limits": {
00:33:10.229 "rw_ios_per_sec": 0,
00:33:10.229 "rw_mbytes_per_sec": 0,
00:33:10.229 "r_mbytes_per_sec": 0,
00:33:10.229 "w_mbytes_per_sec": 0
00:33:10.229 },
00:33:10.229 "claimed": true,
00:33:10.230 "claim_type": "exclusive_write",
00:33:10.230 "zoned": false,
00:33:10.230 "supported_io_types": {
00:33:10.230 "read": true,
00:33:10.230 "write": true,
00:33:10.230 "unmap": true,
00:33:10.230 "flush": true,
00:33:10.230 "reset": true,
00:33:10.230 "nvme_admin": false,
00:33:10.230 "nvme_io": false,
00:33:10.230 "nvme_io_md": false,
00:33:10.230 "write_zeroes": true,
00:33:10.230 "zcopy": true,
00:33:10.230 "get_zone_info": false,
00:33:10.230 "zone_management": false,
00:33:10.230 "zone_append": false,
00:33:10.230 "compare": false,
00:33:10.230 "compare_and_write": false,
00:33:10.230 "abort": true,
00:33:10.230 "seek_hole": false,
00:33:10.230 "seek_data": false,
00:33:10.230 "copy": true,
00:33:10.230 "nvme_iov_md": false
00:33:10.230 },
00:33:10.230 "memory_domains": [
00:33:10.230 {
00:33:10.230 "dma_device_id": "system",
00:33:10.230 "dma_device_type": 1
00:33:10.230 },
00:33:10.230 {
00:33:10.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:10.230 "dma_device_type": 2
00:33:10.230 }
00:33:10.230 ],
00:33:10.230 "driver_specific": {}
00:33:10.230 }'
00:33:10.230 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:10.230 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:33:10.230 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:33:10.487 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:10.487 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:33:10.487 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:33:10.487 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:10.487 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:33:10.488 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:33:10.488 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:10.488 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:33:10.746 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:33:10.746 23:18:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:33:11.003 [2024-07-13 23:19:00.182072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:11.003 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:33:11.261 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:33:11.261 "name": "Existed_Raid",
00:33:11.261 "uuid": "21664ac2-27c5-41a1-9dad-c0fcb8269b5f",
00:33:11.261 "strip_size_kb": 64,
00:33:11.261 "state": "online",
00:33:11.261 "raid_level": "raid5f",
00:33:11.261 "superblock": true,
00:33:11.261 "num_base_bdevs": 4,
00:33:11.261 "num_base_bdevs_discovered": 3,
00:33:11.261 "num_base_bdevs_operational": 3,
00:33:11.261 "base_bdevs_list": [
00:33:11.261 {
00:33:11.261 "name": null,
00:33:11.261 "uuid": "00000000-0000-0000-0000-000000000000",
00:33:11.261 "is_configured": false,
00:33:11.261 "data_offset": 2048,
00:33:11.261 "data_size": 63488
00:33:11.261 },
00:33:11.261 {
00:33:11.261 "name": "BaseBdev2",
00:33:11.261 "uuid": "54ec5787-6269-4478-a6c0-2bb430cc553c",
00:33:11.261 "is_configured": true,
00:33:11.261 "data_offset": 2048,
00:33:11.261 "data_size": 63488
00:33:11.261 },
00:33:11.261 {
00:33:11.261 "name": "BaseBdev3",
00:33:11.261 "uuid": "b4998f82-7e92-468d-a89c-5e26a5ba0af2",
00:33:11.261 "is_configured": true,
00:33:11.261 "data_offset": 2048,
00:33:11.261 "data_size": 63488
00:33:11.261 },
00:33:11.261 {
00:33:11.261 "name": "BaseBdev4",
00:33:11.261 "uuid": "4aa12883-060a-4ead-be76-3122e6398342",
00:33:11.261 "is_configured": true,
00:33:11.261 "data_offset": 2048,
00:33:11.261 "data_size": 63488
00:33:11.261 }
00:33:11.261 ]
00:33:11.261 }'
00:33:11.261 23:19:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:33:11.261 23:19:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:33:11.826 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 ))
00:33:11.826 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:33:11.826 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:11.826 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:33:12.085 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:33:12.085 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:33:12.085 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:33:12.343 [2024-07-13 23:19:01.572899] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:33:12.343 [2024-07-13 23:19:01.573316] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:33:12.343 [2024-07-13 23:19:01.583731] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:33:12.344 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:33:12.344 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:33:12.344 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:12.344 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:33:12.602 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:33:12.602 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:33:12.602 23:19:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:33:12.897 [2024-07-13 23:19:02.079951] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:33:12.897 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:33:12.897 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:33:12.897 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:12.897 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:33:13.156 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:33:13.156 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:33:13.156 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:33:13.415 [2024-07-13 23:19:02.577960] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:33:13.415 [2024-07-13 23:19:02.578191] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:33:13.415 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:33:13.415 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:33:13.415 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:33:13.415 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
00:33:13.416 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev=
00:33:13.416 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
00:33:13.416 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']'
00:33:13.416 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 ))
00:33:13.416 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:33:13.416 23:19:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:33:13.987 BaseBdev2
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:33:13.987 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:33:14.247 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:33:14.248 [
00:33:14.248 {
00:33:14.248 "name": "BaseBdev2",
00:33:14.248 "aliases": [
00:33:14.248 "63bf96a4-cb47-4d16-9693-4fdad3debf80"
00:33:14.248 ],
00:33:14.248 "product_name": "Malloc disk",
00:33:14.248 "block_size": 512,
00:33:14.248 "num_blocks": 65536,
00:33:14.248 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80",
00:33:14.248 "assigned_rate_limits": {
00:33:14.248 "rw_ios_per_sec": 0,
00:33:14.248 "rw_mbytes_per_sec": 0,
00:33:14.248 "r_mbytes_per_sec": 0,
00:33:14.248 "w_mbytes_per_sec": 0
00:33:14.248 },
00:33:14.248 "claimed": false,
00:33:14.248 "zoned": false,
00:33:14.248 "supported_io_types": {
00:33:14.248 "read": true,
00:33:14.248 "write": true,
00:33:14.248 "unmap": true,
00:33:14.248 "flush": true,
00:33:14.248 "reset": true,
00:33:14.248 "nvme_admin": false,
00:33:14.248 "nvme_io": false,
00:33:14.248 "nvme_io_md": false,
00:33:14.248 "write_zeroes": true,
00:33:14.248 "zcopy": true,
00:33:14.248 "get_zone_info": false,
00:33:14.248 "zone_management": false,
00:33:14.248 "zone_append": false,
00:33:14.248 "compare": false,
00:33:14.248 "compare_and_write": false,
00:33:14.248 "abort": true,
00:33:14.248 "seek_hole": false,
00:33:14.248 "seek_data": false,
00:33:14.248 "copy": true,
00:33:14.248 "nvme_iov_md": false
00:33:14.248 },
00:33:14.248 "memory_domains": [
00:33:14.248 {
00:33:14.248 "dma_device_id": "system",
00:33:14.248 "dma_device_type": 1
00:33:14.248 },
00:33:14.248 {
00:33:14.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:14.248 "dma_device_type": 2
00:33:14.248 }
00:33:14.248 ],
00:33:14.248 "driver_specific": {}
00:33:14.248 }
00:33:14.248 ]
00:33:14.248 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:33:14.248 23:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:33:14.248 23:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:33:14.248 23:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:33:14.506 BaseBdev3
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:33:14.506 23:19:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:33:14.764 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:33:15.023 [
00:33:15.023 {
00:33:15.023 "name": "BaseBdev3",
00:33:15.023 "aliases": [
00:33:15.023 "6367774e-0f63-423b-82e2-691718fbdfd9"
00:33:15.023 ],
00:33:15.023 "product_name": "Malloc disk",
00:33:15.023 "block_size": 512,
00:33:15.023 "num_blocks": 65536,
00:33:15.023 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9",
00:33:15.023 "assigned_rate_limits": {
00:33:15.023 "rw_ios_per_sec": 0,
00:33:15.023 "rw_mbytes_per_sec": 0,
00:33:15.023 "r_mbytes_per_sec": 0,
00:33:15.023 "w_mbytes_per_sec": 0
00:33:15.023 },
00:33:15.023 "claimed": false,
00:33:15.023 "zoned": false,
00:33:15.023 "supported_io_types": {
00:33:15.023 "read": true,
00:33:15.023 "write": true,
00:33:15.023 "unmap": true,
00:33:15.023 "flush": true,
00:33:15.023 "reset": true,
00:33:15.023 "nvme_admin": false,
00:33:15.023 "nvme_io": false,
00:33:15.023 "nvme_io_md": false,
00:33:15.023 "write_zeroes": true,
00:33:15.023 "zcopy": true,
00:33:15.023 "get_zone_info": false,
00:33:15.023 "zone_management": false,
00:33:15.023 "zone_append": false,
00:33:15.023 "compare": false,
00:33:15.023 "compare_and_write": false,
00:33:15.023 "abort": true,
00:33:15.023 "seek_hole": false,
00:33:15.023 "seek_data": false,
00:33:15.023 "copy": true,
00:33:15.023 "nvme_iov_md": false
00:33:15.023 },
00:33:15.023 "memory_domains": [
00:33:15.023 {
00:33:15.023 "dma_device_id": "system",
00:33:15.023 "dma_device_type": 1
00:33:15.023 },
00:33:15.023 {
00:33:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:15.023 "dma_device_type": 2
00:33:15.023 }
00:33:15.023 ],
00:33:15.023 "driver_specific": {}
00:33:15.023 }
00:33:15.023 ]
00:33:15.023 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0
00:33:15.023 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:33:15.023 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:33:15.023 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:33:15.281 BaseBdev4
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:33:15.281 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:33:15.539 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:33:15.539 [
00:33:15.539 {
00:33:15.539 "name": "BaseBdev4",
00:33:15.539 "aliases": [
00:33:15.539 "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7"
00:33:15.539 ],
00:33:15.539 "product_name": "Malloc disk",
"block_size": 512, 00:33:15.539 "num_blocks": 65536, 00:33:15.539 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:15.539 "assigned_rate_limits": { 00:33:15.539 "rw_ios_per_sec": 0, 00:33:15.539 "rw_mbytes_per_sec": 0, 00:33:15.539 "r_mbytes_per_sec": 0, 00:33:15.539 "w_mbytes_per_sec": 0 00:33:15.539 }, 00:33:15.539 "claimed": false, 00:33:15.539 "zoned": false, 00:33:15.539 "supported_io_types": { 00:33:15.539 "read": true, 00:33:15.539 "write": true, 00:33:15.539 "unmap": true, 00:33:15.539 "flush": true, 00:33:15.539 "reset": true, 00:33:15.539 "nvme_admin": false, 00:33:15.539 "nvme_io": false, 00:33:15.539 "nvme_io_md": false, 00:33:15.539 "write_zeroes": true, 00:33:15.539 "zcopy": true, 00:33:15.539 "get_zone_info": false, 00:33:15.539 "zone_management": false, 00:33:15.539 "zone_append": false, 00:33:15.539 "compare": false, 00:33:15.539 "compare_and_write": false, 00:33:15.539 "abort": true, 00:33:15.539 "seek_hole": false, 00:33:15.539 "seek_data": false, 00:33:15.539 "copy": true, 00:33:15.539 "nvme_iov_md": false 00:33:15.539 }, 00:33:15.539 "memory_domains": [ 00:33:15.539 { 00:33:15.539 "dma_device_id": "system", 00:33:15.539 "dma_device_type": 1 00:33:15.539 }, 00:33:15.539 { 00:33:15.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.539 "dma_device_type": 2 00:33:15.539 } 00:33:15.539 ], 00:33:15.539 "driver_specific": {} 00:33:15.539 } 00:33:15.539 ] 00:33:15.539 23:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:33:15.539 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:15.539 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:15.539 23:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:15.798 [2024-07-13 23:19:05.121057] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:15.798 [2024-07-13 23:19:05.121364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:15.798 [2024-07-13 23:19:05.121512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:15.798 [2024-07-13 23:19:05.123551] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:15.798 [2024-07-13 23:19:05.123749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.798 23:19:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.798 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.057 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:16.057 "name": "Existed_Raid", 00:33:16.057 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:16.057 "strip_size_kb": 64, 00:33:16.057 "state": "configuring", 00:33:16.057 "raid_level": "raid5f", 00:33:16.057 "superblock": true, 00:33:16.057 "num_base_bdevs": 4, 00:33:16.057 "num_base_bdevs_discovered": 3, 00:33:16.057 "num_base_bdevs_operational": 4, 00:33:16.057 "base_bdevs_list": [ 00:33:16.057 { 00:33:16.057 "name": "BaseBdev1", 00:33:16.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.057 "is_configured": false, 00:33:16.057 "data_offset": 0, 00:33:16.057 "data_size": 0 00:33:16.057 }, 00:33:16.057 { 00:33:16.057 "name": "BaseBdev2", 00:33:16.057 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:16.057 "is_configured": true, 00:33:16.057 "data_offset": 2048, 00:33:16.057 "data_size": 63488 00:33:16.057 }, 00:33:16.057 { 00:33:16.057 "name": "BaseBdev3", 00:33:16.057 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:16.057 "is_configured": true, 00:33:16.057 "data_offset": 2048, 00:33:16.057 "data_size": 63488 00:33:16.057 }, 00:33:16.057 { 00:33:16.057 "name": "BaseBdev4", 00:33:16.057 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:16.057 "is_configured": true, 00:33:16.057 "data_offset": 2048, 00:33:16.058 "data_size": 63488 00:33:16.058 } 00:33:16.058 ] 00:33:16.058 }' 00:33:16.058 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:16.058 23:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.625 23:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:16.883 [2024-07-13 23:19:06.209388] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.883 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.142 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.142 "name": "Existed_Raid", 00:33:17.142 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:17.142 "strip_size_kb": 64, 00:33:17.142 "state": "configuring", 00:33:17.142 "raid_level": "raid5f", 00:33:17.142 "superblock": true, 00:33:17.142 "num_base_bdevs": 4, 00:33:17.142 "num_base_bdevs_discovered": 2, 00:33:17.142 "num_base_bdevs_operational": 4, 00:33:17.142 "base_bdevs_list": [ 00:33:17.142 { 00:33:17.142 "name": "BaseBdev1", 00:33:17.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.142 "is_configured": false, 00:33:17.142 "data_offset": 0, 00:33:17.142 "data_size": 0 00:33:17.142 }, 00:33:17.142 { 00:33:17.142 "name": null, 00:33:17.142 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:17.142 "is_configured": false, 00:33:17.142 "data_offset": 2048, 00:33:17.142 "data_size": 63488 00:33:17.142 }, 00:33:17.142 { 00:33:17.142 "name": "BaseBdev3", 00:33:17.142 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:17.142 "is_configured": true, 00:33:17.142 "data_offset": 2048, 00:33:17.142 "data_size": 63488 00:33:17.142 }, 00:33:17.142 { 00:33:17.142 "name": "BaseBdev4", 00:33:17.142 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:17.142 "is_configured": true, 00:33:17.142 "data_offset": 2048, 00:33:17.142 "data_size": 63488 00:33:17.142 } 00:33:17.142 ] 00:33:17.142 }' 00:33:17.142 23:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.142 23:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.708 23:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.709 23:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:17.967 23:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:33:17.967 23:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:18.225 [2024-07-13 23:19:07.574426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:18.225 BaseBdev1 00:33:18.225 23:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:33:18.225 23:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:33:18.225 23:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:18.225 23:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:33:18.225 23:19:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:18.226 23:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:18.226 23:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:18.484 23:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:18.742 [ 00:33:18.742 { 00:33:18.742 "name": "BaseBdev1", 00:33:18.742 "aliases": [ 00:33:18.742 "26f1f92a-6b0c-47c5-9386-1ab444d321d8" 00:33:18.742 ], 00:33:18.742 "product_name": "Malloc disk", 00:33:18.742 "block_size": 512, 00:33:18.742 "num_blocks": 65536, 00:33:18.742 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:18.742 "assigned_rate_limits": { 00:33:18.742 "rw_ios_per_sec": 0, 00:33:18.742 "rw_mbytes_per_sec": 0, 00:33:18.742 "r_mbytes_per_sec": 0, 00:33:18.742 "w_mbytes_per_sec": 0 00:33:18.742 }, 00:33:18.742 "claimed": true, 00:33:18.742 "claim_type": "exclusive_write", 00:33:18.742 "zoned": false, 00:33:18.742 "supported_io_types": { 00:33:18.742 "read": true, 00:33:18.742 "write": true, 00:33:18.742 "unmap": true, 00:33:18.742 "flush": true, 00:33:18.742 "reset": true, 00:33:18.742 "nvme_admin": false, 00:33:18.742 "nvme_io": false, 00:33:18.742 "nvme_io_md": false, 00:33:18.742 "write_zeroes": true, 00:33:18.742 "zcopy": true, 00:33:18.743 "get_zone_info": false, 00:33:18.743 "zone_management": false, 00:33:18.743 "zone_append": false, 00:33:18.743 "compare": false, 00:33:18.743 "compare_and_write": false, 00:33:18.743 "abort": true, 00:33:18.743 "seek_hole": false, 00:33:18.743 "seek_data": false, 00:33:18.743 "copy": true, 00:33:18.743 "nvme_iov_md": false 00:33:18.743 }, 00:33:18.743 "memory_domains": [ 00:33:18.743 { 00:33:18.743 "dma_device_id": "system", 00:33:18.743 "dma_device_type": 1 00:33:18.743 }, 00:33:18.743 { 00:33:18.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.743 "dma_device_type": 2 00:33:18.743 } 00:33:18.743 ], 00:33:18.743 "driver_specific": {} 00:33:18.743 } 00:33:18.743 ] 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.743 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.001 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:19.001 "name": "Existed_Raid", 00:33:19.001 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:19.001 "strip_size_kb": 64, 00:33:19.001 "state": "configuring", 00:33:19.001 "raid_level": "raid5f", 00:33:19.001 "superblock": true, 00:33:19.001 "num_base_bdevs": 4, 00:33:19.001 "num_base_bdevs_discovered": 3, 00:33:19.001 "num_base_bdevs_operational": 4, 00:33:19.001 "base_bdevs_list": [ 00:33:19.001 { 00:33:19.001 "name": "BaseBdev1", 00:33:19.001 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:19.001 "is_configured": true, 00:33:19.001 "data_offset": 2048, 00:33:19.001 "data_size": 63488 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "name": null, 00:33:19.001 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:19.001 "is_configured": false, 00:33:19.001 "data_offset": 2048, 00:33:19.001 "data_size": 63488 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "name": "BaseBdev3", 00:33:19.001 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:19.001 "is_configured": true, 00:33:19.001 "data_offset": 2048, 00:33:19.001 "data_size": 63488 00:33:19.001 }, 00:33:19.001 { 00:33:19.001 "name": "BaseBdev4", 00:33:19.001 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:19.001 "is_configured": true, 00:33:19.001 "data_offset": 2048, 00:33:19.001 "data_size": 63488 00:33:19.001 } 00:33:19.001 ] 00:33:19.001 }' 00:33:19.002 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:19.002 23:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.569 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.569 23:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:19.827 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:33:19.827 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:20.087 [2024-07-13 23:19:09.342925] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:20.087 23:19:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.087 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.345 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.345 "name": "Existed_Raid", 00:33:20.345 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:20.345 "strip_size_kb": 64, 00:33:20.345 "state": "configuring", 00:33:20.345 "raid_level": "raid5f", 00:33:20.345 "superblock": true, 00:33:20.345 "num_base_bdevs": 4, 00:33:20.345 "num_base_bdevs_discovered": 2, 00:33:20.345 "num_base_bdevs_operational": 4, 00:33:20.345 "base_bdevs_list": [ 00:33:20.345 { 00:33:20.345 "name": "BaseBdev1", 00:33:20.345 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:20.345 "is_configured": true, 00:33:20.345 "data_offset": 2048, 00:33:20.345 "data_size": 63488 00:33:20.345 }, 00:33:20.345 { 00:33:20.345 "name": null, 00:33:20.345 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:20.345 "is_configured": false, 00:33:20.345 "data_offset": 2048, 00:33:20.345 "data_size": 63488 00:33:20.345 }, 00:33:20.345 { 00:33:20.345 "name": null, 00:33:20.345 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:20.345 "is_configured": false, 00:33:20.345 "data_offset": 2048, 00:33:20.345 "data_size": 63488 00:33:20.345 }, 00:33:20.345 { 00:33:20.345 "name": "BaseBdev4", 00:33:20.345 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:20.345 "is_configured": true, 00:33:20.345 "data_offset": 2048, 00:33:20.345 "data_size": 63488 00:33:20.345 } 00:33:20.345 ] 00:33:20.345 }' 00:33:20.345 23:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.346 23:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.913 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:20.913 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.171 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:33:21.171 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:21.430 [2024-07-13 23:19:10.703175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.430 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.689 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:21.689 "name": "Existed_Raid", 00:33:21.689 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:21.689 "strip_size_kb": 64, 00:33:21.689 "state": "configuring", 00:33:21.689 "raid_level": "raid5f", 00:33:21.689 "superblock": true, 00:33:21.689 "num_base_bdevs": 4, 00:33:21.689 "num_base_bdevs_discovered": 3, 00:33:21.689 "num_base_bdevs_operational": 4, 00:33:21.689 "base_bdevs_list": [ 00:33:21.689 { 00:33:21.689 "name": "BaseBdev1", 00:33:21.689 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:21.689 "is_configured": true, 00:33:21.689 "data_offset": 2048, 00:33:21.689 "data_size": 63488 00:33:21.689 }, 00:33:21.689 { 00:33:21.689 "name": null, 00:33:21.689 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:21.689 "is_configured": false, 00:33:21.689 "data_offset": 2048, 00:33:21.689 "data_size": 63488 00:33:21.689 }, 00:33:21.689 { 00:33:21.689 "name": "BaseBdev3", 00:33:21.689 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:21.689 "is_configured": true, 00:33:21.689 "data_offset": 2048, 00:33:21.689 "data_size": 63488 00:33:21.689 }, 00:33:21.689 { 00:33:21.689 "name": "BaseBdev4", 00:33:21.689 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:21.689 "is_configured": true, 00:33:21.689 "data_offset": 2048, 00:33:21.689 "data_size": 63488 00:33:21.689 } 00:33:21.689 ] 00:33:21.689 }' 00:33:21.689 23:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:21.689 23:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.257 23:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.257 23:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:22.516 23:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:33:22.516 23:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:22.774 [2024-07-13 23:19:12.043533] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.774 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:23.033 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:23.033 "name": "Existed_Raid", 00:33:23.033 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:23.033 "strip_size_kb": 64, 00:33:23.033 "state": "configuring", 00:33:23.033 "raid_level": "raid5f", 00:33:23.033 "superblock": true, 00:33:23.033 "num_base_bdevs": 4, 00:33:23.033 "num_base_bdevs_discovered": 2, 00:33:23.033 "num_base_bdevs_operational": 4, 00:33:23.033 "base_bdevs_list": [ 00:33:23.033 { 00:33:23.033 "name": null, 00:33:23.033 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:23.033 "is_configured": false, 00:33:23.033 "data_offset": 2048, 00:33:23.033 "data_size": 63488 00:33:23.033 }, 00:33:23.033 { 00:33:23.034 "name": null, 00:33:23.034 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:23.034 "is_configured": false, 00:33:23.034 "data_offset": 2048, 00:33:23.034 "data_size": 63488 00:33:23.034 }, 00:33:23.034 { 00:33:23.034 "name": "BaseBdev3", 00:33:23.034 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:23.034 "is_configured": true, 00:33:23.034 "data_offset": 2048, 00:33:23.034 "data_size": 63488 00:33:23.034 }, 00:33:23.034 { 00:33:23.034 "name": "BaseBdev4", 00:33:23.034 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:23.034 "is_configured": true, 00:33:23.034 "data_offset": 2048, 00:33:23.034 "data_size": 63488 00:33:23.034 } 00:33:23.034 ] 00:33:23.034 }' 00:33:23.034 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:23.034 23:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:23.601 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:23.601 23:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.859 23:19:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:33:23.859 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:24.116 [2024-07-13 23:19:13.424831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.116 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:24.382 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.382 "name": "Existed_Raid", 00:33:24.382 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:24.382 "strip_size_kb": 64, 00:33:24.382 "state": "configuring", 00:33:24.382 "raid_level": "raid5f", 00:33:24.382 "superblock": true, 00:33:24.382 "num_base_bdevs": 4, 00:33:24.382 "num_base_bdevs_discovered": 3, 00:33:24.382 "num_base_bdevs_operational": 4, 00:33:24.382 "base_bdevs_list": [ 00:33:24.382 { 00:33:24.382 "name": null, 00:33:24.382 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:24.382 "is_configured": false, 00:33:24.382 "data_offset": 2048, 00:33:24.382 "data_size": 63488 00:33:24.382 }, 00:33:24.382 { 00:33:24.382 "name": "BaseBdev2", 00:33:24.382 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:24.382 "is_configured": true, 00:33:24.382 "data_offset": 2048, 00:33:24.382 "data_size": 63488 00:33:24.382 }, 00:33:24.382 { 00:33:24.382 "name": "BaseBdev3", 00:33:24.382 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:24.382 "is_configured": true, 00:33:24.382 "data_offset": 2048, 00:33:24.382 "data_size": 63488 00:33:24.382 }, 00:33:24.382 { 00:33:24.382 "name": "BaseBdev4", 00:33:24.382 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:24.382 "is_configured": true, 00:33:24.382 "data_offset": 2048, 00:33:24.382 "data_size": 63488 00:33:24.382 } 00:33:24.382 ] 00:33:24.382 }' 00:33:24.382 23:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.382 23:19:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:24.963 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.963 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:25.222 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:33:25.222 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.222 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:25.480 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 26f1f92a-6b0c-47c5-9386-1ab444d321d8 00:33:25.739 [2024-07-13 23:19:14.934977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:25.739 [2024-07-13 23:19:14.935388] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:33:25.739 [2024-07-13 23:19:14.935537] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:25.739 [2024-07-13 23:19:14.935742] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:33:25.739 [2024-07-13 23:19:14.936578] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:33:25.739 [2024-07-13 23:19:14.936720] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:33:25.739 [2024-07-13 23:19:14.936970] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:25.739 NewBaseBdev 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:25.739 23:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:25.997 23:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:26.256 [ 00:33:26.256 { 00:33:26.256 "name": "NewBaseBdev", 00:33:26.256 "aliases": [ 00:33:26.256 "26f1f92a-6b0c-47c5-9386-1ab444d321d8" 00:33:26.256 ], 00:33:26.256 "product_name": "Malloc disk", 00:33:26.256 "block_size": 512, 00:33:26.256 "num_blocks": 65536, 00:33:26.256 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:26.256 "assigned_rate_limits": { 00:33:26.256 "rw_ios_per_sec": 0, 00:33:26.256 "rw_mbytes_per_sec": 
0, 00:33:26.256 "r_mbytes_per_sec": 0, 00:33:26.256 "w_mbytes_per_sec": 0 00:33:26.256 }, 00:33:26.256 "claimed": true, 00:33:26.256 "claim_type": "exclusive_write", 00:33:26.256 "zoned": false, 00:33:26.256 "supported_io_types": { 00:33:26.256 "read": true, 00:33:26.256 "write": true, 00:33:26.256 "unmap": true, 00:33:26.256 "flush": true, 00:33:26.256 "reset": true, 00:33:26.256 "nvme_admin": false, 00:33:26.256 "nvme_io": false, 00:33:26.256 "nvme_io_md": false, 00:33:26.256 "write_zeroes": true, 00:33:26.256 "zcopy": true, 00:33:26.256 "get_zone_info": false, 00:33:26.256 "zone_management": false, 00:33:26.256 "zone_append": false, 00:33:26.256 "compare": false, 00:33:26.256 "compare_and_write": false, 00:33:26.256 "abort": true, 00:33:26.256 "seek_hole": false, 00:33:26.256 "seek_data": false, 00:33:26.256 "copy": true, 00:33:26.256 "nvme_iov_md": false 00:33:26.256 }, 00:33:26.256 "memory_domains": [ 00:33:26.256 { 00:33:26.256 "dma_device_id": "system", 00:33:26.256 "dma_device_type": 1 00:33:26.256 }, 00:33:26.256 { 00:33:26.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.256 "dma_device_type": 2 00:33:26.256 } 00:33:26.256 ], 00:33:26.256 "driver_specific": {} 00:33:26.256 } 00:33:26.256 ] 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:26.256 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:26.257 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:26.257 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:26.257 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:26.257 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:26.257 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.257 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:26.516 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:26.516 "name": "Existed_Raid", 00:33:26.516 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:26.516 "strip_size_kb": 64, 00:33:26.516 "state": "online", 00:33:26.516 "raid_level": "raid5f", 00:33:26.516 "superblock": true, 00:33:26.516 "num_base_bdevs": 4, 00:33:26.516 "num_base_bdevs_discovered": 4, 00:33:26.516 "num_base_bdevs_operational": 4, 00:33:26.516 "base_bdevs_list": [ 00:33:26.516 { 00:33:26.516 "name": "NewBaseBdev", 00:33:26.516 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:26.516 "is_configured": true, 00:33:26.516 "data_offset": 2048, 
00:33:26.516 "data_size": 63488 00:33:26.516 }, 00:33:26.516 { 00:33:26.516 "name": "BaseBdev2", 00:33:26.516 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:26.516 "is_configured": true, 00:33:26.516 "data_offset": 2048, 00:33:26.516 "data_size": 63488 00:33:26.516 }, 00:33:26.516 { 00:33:26.516 "name": "BaseBdev3", 00:33:26.516 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:26.516 "is_configured": true, 00:33:26.516 "data_offset": 2048, 00:33:26.516 "data_size": 63488 00:33:26.516 }, 00:33:26.516 { 00:33:26.516 "name": "BaseBdev4", 00:33:26.516 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:26.516 "is_configured": true, 00:33:26.516 "data_offset": 2048, 00:33:26.516 "data_size": 63488 00:33:26.516 } 00:33:26.516 ] 00:33:26.516 }' 00:33:26.516 23:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:26.516 23:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:27.084 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:27.343 [2024-07-13 23:19:16.609566] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.343 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:27.343 "name": "Existed_Raid", 00:33:27.343 "aliases": [ 00:33:27.343 "e98c438a-eb72-40d2-be62-6c7133cf320f" 00:33:27.343 ], 00:33:27.343 "product_name": "Raid Volume", 00:33:27.343 "block_size": 512, 00:33:27.343 "num_blocks": 190464, 00:33:27.343 "uuid": "e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:27.343 "assigned_rate_limits": { 00:33:27.343 "rw_ios_per_sec": 0, 00:33:27.343 "rw_mbytes_per_sec": 0, 00:33:27.343 "r_mbytes_per_sec": 0, 00:33:27.343 "w_mbytes_per_sec": 0 00:33:27.343 }, 00:33:27.343 "claimed": false, 00:33:27.343 "zoned": false, 00:33:27.343 "supported_io_types": { 00:33:27.343 "read": true, 00:33:27.343 "write": true, 00:33:27.343 "unmap": false, 00:33:27.343 "flush": false, 00:33:27.343 "reset": true, 00:33:27.343 "nvme_admin": false, 00:33:27.343 "nvme_io": false, 00:33:27.343 "nvme_io_md": false, 00:33:27.343 "write_zeroes": true, 00:33:27.343 "zcopy": false, 00:33:27.343 "get_zone_info": false, 00:33:27.343 "zone_management": false, 00:33:27.343 "zone_append": false, 00:33:27.343 "compare": false, 00:33:27.343 "compare_and_write": false, 00:33:27.343 "abort": false, 00:33:27.343 "seek_hole": false, 00:33:27.343 "seek_data": false, 00:33:27.343 "copy": false, 00:33:27.343 "nvme_iov_md": false 00:33:27.343 }, 00:33:27.343 "driver_specific": { 00:33:27.343 "raid": { 00:33:27.343 "uuid": 
"e98c438a-eb72-40d2-be62-6c7133cf320f", 00:33:27.343 "strip_size_kb": 64, 00:33:27.343 "state": "online", 00:33:27.343 "raid_level": "raid5f", 00:33:27.343 "superblock": true, 00:33:27.343 "num_base_bdevs": 4, 00:33:27.343 "num_base_bdevs_discovered": 4, 00:33:27.343 "num_base_bdevs_operational": 4, 00:33:27.343 "base_bdevs_list": [ 00:33:27.343 { 00:33:27.343 "name": "NewBaseBdev", 00:33:27.343 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:27.343 "is_configured": true, 00:33:27.343 "data_offset": 2048, 00:33:27.343 "data_size": 63488 00:33:27.343 }, 00:33:27.343 { 00:33:27.343 "name": "BaseBdev2", 00:33:27.343 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:27.343 "is_configured": true, 00:33:27.343 "data_offset": 2048, 00:33:27.343 "data_size": 63488 00:33:27.343 }, 00:33:27.343 { 00:33:27.343 "name": "BaseBdev3", 00:33:27.343 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:27.343 "is_configured": true, 00:33:27.343 "data_offset": 2048, 00:33:27.343 "data_size": 63488 00:33:27.343 }, 00:33:27.343 { 00:33:27.343 "name": "BaseBdev4", 00:33:27.343 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:27.343 "is_configured": true, 00:33:27.343 "data_offset": 2048, 00:33:27.343 "data_size": 63488 00:33:27.343 } 00:33:27.343 ] 00:33:27.343 } 00:33:27.343 } 00:33:27.343 }' 00:33:27.343 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:27.343 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:33:27.343 BaseBdev2 00:33:27.343 BaseBdev3 00:33:27.343 BaseBdev4' 00:33:27.343 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:27.343 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:27.343 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:33:27.602 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:27.602 "name": "NewBaseBdev", 00:33:27.602 "aliases": [ 00:33:27.602 "26f1f92a-6b0c-47c5-9386-1ab444d321d8" 00:33:27.602 ], 00:33:27.602 "product_name": "Malloc disk", 00:33:27.602 "block_size": 512, 00:33:27.602 "num_blocks": 65536, 00:33:27.602 "uuid": "26f1f92a-6b0c-47c5-9386-1ab444d321d8", 00:33:27.602 "assigned_rate_limits": { 00:33:27.602 "rw_ios_per_sec": 0, 00:33:27.602 "rw_mbytes_per_sec": 0, 00:33:27.602 "r_mbytes_per_sec": 0, 00:33:27.602 "w_mbytes_per_sec": 0 00:33:27.602 }, 00:33:27.602 "claimed": true, 00:33:27.602 "claim_type": "exclusive_write", 00:33:27.602 "zoned": false, 00:33:27.602 "supported_io_types": { 00:33:27.602 "read": true, 00:33:27.602 "write": true, 00:33:27.602 "unmap": true, 00:33:27.602 "flush": true, 00:33:27.602 "reset": true, 00:33:27.602 "nvme_admin": false, 00:33:27.602 "nvme_io": false, 00:33:27.602 "nvme_io_md": false, 00:33:27.602 "write_zeroes": true, 00:33:27.602 "zcopy": true, 00:33:27.602 "get_zone_info": false, 00:33:27.602 "zone_management": false, 00:33:27.602 "zone_append": false, 00:33:27.602 "compare": false, 00:33:27.602 "compare_and_write": false, 00:33:27.602 "abort": true, 00:33:27.602 "seek_hole": false, 00:33:27.602 "seek_data": false, 00:33:27.602 "copy": true, 00:33:27.602 "nvme_iov_md": false 00:33:27.602 }, 00:33:27.602 "memory_domains": [ 00:33:27.602 { 
00:33:27.602 "dma_device_id": "system", 00:33:27.602 "dma_device_type": 1 00:33:27.602 }, 00:33:27.602 { 00:33:27.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.602 "dma_device_type": 2 00:33:27.602 } 00:33:27.602 ], 00:33:27.602 "driver_specific": {} 00:33:27.602 }' 00:33:27.602 23:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:27.861 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:28.119 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:28.378 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:28.378 "name": "BaseBdev2", 00:33:28.378 "aliases": [ 00:33:28.378 "63bf96a4-cb47-4d16-9693-4fdad3debf80" 00:33:28.378 ], 00:33:28.378 "product_name": "Malloc disk", 00:33:28.378 "block_size": 512, 00:33:28.378 "num_blocks": 65536, 00:33:28.378 "uuid": "63bf96a4-cb47-4d16-9693-4fdad3debf80", 00:33:28.378 "assigned_rate_limits": { 00:33:28.378 "rw_ios_per_sec": 0, 00:33:28.378 "rw_mbytes_per_sec": 0, 00:33:28.378 "r_mbytes_per_sec": 0, 00:33:28.378 "w_mbytes_per_sec": 0 00:33:28.378 }, 00:33:28.378 "claimed": true, 00:33:28.378 "claim_type": "exclusive_write", 00:33:28.378 "zoned": false, 00:33:28.378 "supported_io_types": { 00:33:28.378 "read": true, 00:33:28.378 "write": true, 00:33:28.379 "unmap": true, 00:33:28.379 "flush": true, 00:33:28.379 "reset": true, 00:33:28.379 "nvme_admin": false, 00:33:28.379 "nvme_io": false, 00:33:28.379 "nvme_io_md": false, 00:33:28.379 "write_zeroes": true, 00:33:28.379 "zcopy": true, 00:33:28.379 "get_zone_info": false, 00:33:28.379 "zone_management": false, 00:33:28.379 "zone_append": false, 00:33:28.379 "compare": false, 00:33:28.379 "compare_and_write": false, 00:33:28.379 "abort": true, 00:33:28.379 "seek_hole": false, 00:33:28.379 "seek_data": false, 00:33:28.379 "copy": true, 00:33:28.379 "nvme_iov_md": false 00:33:28.379 }, 00:33:28.379 "memory_domains": [ 00:33:28.379 { 00:33:28.379 "dma_device_id": "system", 00:33:28.379 "dma_device_type": 1 00:33:28.379 }, 00:33:28.379 { 00:33:28.379 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.379 "dma_device_type": 2 00:33:28.379 } 00:33:28.379 ], 00:33:28.379 "driver_specific": {} 00:33:28.379 }' 00:33:28.379 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.379 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.379 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:28.379 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.379 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.637 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:28.637 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.637 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.637 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:28.637 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.637 23:19:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.637 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:28.637 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:28.637 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:28.637 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:28.896 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:28.896 "name": "BaseBdev3", 00:33:28.896 "aliases": [ 00:33:28.896 "6367774e-0f63-423b-82e2-691718fbdfd9" 00:33:28.896 ], 00:33:28.896 "product_name": "Malloc disk", 00:33:28.896 "block_size": 512, 00:33:28.896 "num_blocks": 65536, 00:33:28.896 "uuid": "6367774e-0f63-423b-82e2-691718fbdfd9", 00:33:28.896 "assigned_rate_limits": { 00:33:28.896 "rw_ios_per_sec": 0, 00:33:28.896 "rw_mbytes_per_sec": 0, 00:33:28.896 "r_mbytes_per_sec": 0, 00:33:28.896 "w_mbytes_per_sec": 0 00:33:28.896 }, 00:33:28.896 "claimed": true, 00:33:28.896 "claim_type": "exclusive_write", 00:33:28.896 "zoned": false, 00:33:28.896 "supported_io_types": { 00:33:28.896 "read": true, 00:33:28.896 "write": true, 00:33:28.896 "unmap": true, 00:33:28.896 "flush": true, 00:33:28.896 "reset": true, 00:33:28.896 "nvme_admin": false, 00:33:28.896 "nvme_io": false, 00:33:28.896 "nvme_io_md": false, 00:33:28.896 "write_zeroes": true, 00:33:28.896 "zcopy": true, 00:33:28.896 "get_zone_info": false, 00:33:28.896 "zone_management": false, 00:33:28.896 "zone_append": false, 00:33:28.896 "compare": false, 00:33:28.896 "compare_and_write": false, 00:33:28.896 "abort": true, 00:33:28.896 "seek_hole": false, 00:33:28.896 "seek_data": false, 00:33:28.896 "copy": true, 00:33:28.896 "nvme_iov_md": false 00:33:28.896 }, 00:33:28.896 "memory_domains": [ 00:33:28.896 { 00:33:28.896 "dma_device_id": "system", 00:33:28.896 "dma_device_type": 1 00:33:28.897 }, 00:33:28.897 { 00:33:28.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.897 "dma_device_type": 2 00:33:28.897 } 00:33:28.897 ], 00:33:28.897 
"driver_specific": {} 00:33:28.897 }' 00:33:28.897 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:29.155 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:29.414 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:29.672 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:29.672 "name": "BaseBdev4", 00:33:29.672 "aliases": [ 00:33:29.672 "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7" 00:33:29.672 ], 00:33:29.672 "product_name": "Malloc disk", 00:33:29.672 "block_size": 512, 00:33:29.672 "num_blocks": 65536, 00:33:29.672 "uuid": "3aca7ac6-e7a1-4e21-9ed3-d4d8391618d7", 00:33:29.672 "assigned_rate_limits": { 00:33:29.672 "rw_ios_per_sec": 0, 00:33:29.672 "rw_mbytes_per_sec": 0, 00:33:29.672 "r_mbytes_per_sec": 0, 00:33:29.672 "w_mbytes_per_sec": 0 00:33:29.672 }, 00:33:29.672 "claimed": true, 00:33:29.672 "claim_type": "exclusive_write", 00:33:29.672 "zoned": false, 00:33:29.672 "supported_io_types": { 00:33:29.672 "read": true, 00:33:29.672 "write": true, 00:33:29.672 "unmap": true, 00:33:29.672 "flush": true, 00:33:29.672 "reset": true, 00:33:29.672 "nvme_admin": false, 00:33:29.672 "nvme_io": false, 00:33:29.672 "nvme_io_md": false, 00:33:29.672 "write_zeroes": true, 00:33:29.672 "zcopy": true, 00:33:29.672 "get_zone_info": false, 00:33:29.672 "zone_management": false, 00:33:29.672 "zone_append": false, 00:33:29.672 "compare": false, 00:33:29.672 "compare_and_write": false, 00:33:29.672 "abort": true, 00:33:29.672 "seek_hole": false, 00:33:29.672 "seek_data": false, 00:33:29.672 "copy": true, 00:33:29.672 "nvme_iov_md": false 00:33:29.672 }, 00:33:29.672 "memory_domains": [ 00:33:29.672 { 00:33:29.672 "dma_device_id": "system", 00:33:29.672 "dma_device_type": 1 00:33:29.672 }, 00:33:29.672 { 00:33:29.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:29.672 "dma_device_type": 2 00:33:29.672 } 00:33:29.673 ], 00:33:29.673 "driver_specific": {} 00:33:29.673 }' 00:33:29.673 23:19:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.673 23:19:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.673 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:29.673 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.673 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:29.930 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:30.211 [2024-07-13 23:19:19.578148] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:30.211 [2024-07-13 23:19:19.578342] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:30.212 [2024-07-13 23:19:19.578524] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:30.212 [2024-07-13 23:19:19.578918] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:30.212 [2024-07-13 23:19:19.579072] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:33:30.212 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 164850 00:33:30.212 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 164850 ']' 00:33:30.212 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 164850 00:33:30.212 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:33:30.212 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:30.212 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164850 00:33:30.469 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:30.469 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:30.469 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164850' 00:33:30.469 killing process with pid 164850 00:33:30.469 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 164850 00:33:30.469 [2024-07-13 23:19:19.622225] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:30.469 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 
-- # wait 164850 00:33:30.470 [2024-07-13 23:19:19.657688] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:30.728 23:19:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:33:30.728 00:33:30.728 real 0m33.459s 00:33:30.728 user 1m3.737s 00:33:30.728 sys 0m4.041s 00:33:30.728 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:30.728 23:19:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.728 ************************************ 00:33:30.728 END TEST raid5f_state_function_test_sb 00:33:30.728 ************************************ 00:33:30.728 23:19:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:30.728 23:19:19 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:33:30.728 23:19:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:33:30.728 23:19:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:30.728 23:19:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.728 ************************************ 00:33:30.728 START TEST raid5f_superblock_test 00:33:30.728 ************************************ 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 4 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=165934 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 165934 /var/tmp/spdk-raid.sock 00:33:30.728 
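The block above closes raid5f_state_function_test_sb by walking each base bdev through the same property check (bdev_raid.sh@203-208): fetch the bdev's JSON with bdev_get_bdevs, then assert that block_size is 512 and that md_size, md_interleave and dif_type are all null, before tearing the array down with bdev_raid_delete and killing the app. A minimal standalone sketch of that check loop, assuming a still-running SPDK target on /var/tmp/spdk-raid.sock and the repo path shown in the trace (the per-field checks are condensed from the script's separate jq invocations):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for name in BaseBdev3 BaseBdev4; do                  # the two bdevs verified in the trace above
    # bdev_get_bdevs -b <name> returns a one-element JSON array; unwrap the object
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]   # malloc base bdevs use 512-byte blocks
    [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata region
    [[ $(jq .md_interleave <<< "$info") == null ]]   # so no interleaved metadata either
    [[ $(jq .dif_type      <<< "$info") == null ]]   # and no DIF protection
done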
23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 165934 ']' 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:30.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.728 23:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.728 [2024-07-13 23:19:20.006701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:30.728 [2024-07-13 23:19:20.007136] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165934 ] 00:33:30.987 [2024-07-13 23:19:20.158203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.987 [2024-07-13 23:19:20.227401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.987 [2024-07-13 23:19:20.288446] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:31.553 23:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:33:31.811 malloc1 00:33:31.811 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:32.069 [2024-07-13 23:19:21.387416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:32.069 [2024-07-13 23:19:21.387771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.069 [2024-07-13 23:19:21.387927] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 
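The trace above is the start of the superblock test's fixture construction: each base device is a 32 MB malloc bdev with 512-byte blocks (hence "num_blocks": 65536 in the dumps) wrapped in a passthru bdev with a fixed all-zeros UUID, so the pt1..pt4 names and UUIDs are stable across runs. A sketch of the construction sequence condensed from the traced RPCs, using the same socket and repo paths; the printf UUID scheme is an assumption introduced here purely to reproduce the literal UUIDs shown in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3 4; do
    # 32 MB backing store with 512-byte blocks, as in bdev_raid.sh@424
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    # wrap it in a passthru bdev with a deterministic UUID, as in bdev_raid.sh@425
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "$(printf '00000000-0000-0000-0000-%012d' "$i")"
done
# assemble raid5f over the four passthru bdevs with a 64 KiB strip (-z 64) and an
# on-disk superblock (-s), as in bdev_raid.sh@429
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s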
00:33:32.069 [2024-07-13 23:19:21.388131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.069 [2024-07-13 23:19:21.390812] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.069 [2024-07-13 23:19:21.391002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:32.069 pt1 00:33:32.069 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:32.069 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:32.069 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:33:32.069 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:33:32.069 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:32.069 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:32.070 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:32.070 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:32.070 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:33:32.328 malloc2 00:33:32.328 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:32.586 [2024-07-13 23:19:21.854088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:32.586 [2024-07-13 23:19:21.854352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.586 [2024-07-13 23:19:21.854506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:33:32.586 [2024-07-13 23:19:21.854649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.586 [2024-07-13 23:19:21.857178] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.586 [2024-07-13 23:19:21.857388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:32.586 pt2 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:32.586 23:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b malloc3 00:33:32.844 malloc3 00:33:32.844 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:33.102 [2024-07-13 23:19:22.319116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:33.102 [2024-07-13 23:19:22.319409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.102 [2024-07-13 23:19:22.319580] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:33.102 [2024-07-13 23:19:22.319762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.102 [2024-07-13 23:19:22.322382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.102 [2024-07-13 23:19:22.322565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:33.102 pt3 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:33.102 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:33:33.360 malloc4 00:33:33.360 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:33.617 [2024-07-13 23:19:22.818446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:33.617 [2024-07-13 23:19:22.818795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.617 [2024-07-13 23:19:22.818971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:33.617 [2024-07-13 23:19:22.819127] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.617 [2024-07-13 23:19:22.821878] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.617 [2024-07-13 23:19:22.822062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:33.617 pt4 00:33:33.617 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:33.617 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:33.617 23:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n 
raid_bdev1 -s 00:33:33.875 [2024-07-13 23:19:23.098597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:33.875 [2024-07-13 23:19:23.100853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:33.875 [2024-07-13 23:19:23.101155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:33.875 [2024-07-13 23:19:23.101347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:33.875 [2024-07-13 23:19:23.101774] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:33:33.875 [2024-07-13 23:19:23.101930] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:33.875 [2024-07-13 23:19:23.102116] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:33:33.875 [2024-07-13 23:19:23.103017] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:33:33.875 [2024-07-13 23:19:23.103159] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:33:33.875 [2024-07-13 23:19:23.103515] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:33.875 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:33.876 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:33.876 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:33.876 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:33.876 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.876 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.133 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.133 "name": "raid_bdev1", 00:33:34.133 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:34.133 "strip_size_kb": 64, 00:33:34.133 "state": "online", 00:33:34.133 "raid_level": "raid5f", 00:33:34.133 "superblock": true, 00:33:34.133 "num_base_bdevs": 4, 00:33:34.133 "num_base_bdevs_discovered": 4, 00:33:34.133 "num_base_bdevs_operational": 4, 00:33:34.133 "base_bdevs_list": [ 00:33:34.133 { 00:33:34.133 "name": "pt1", 00:33:34.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:34.133 "is_configured": true, 00:33:34.133 "data_offset": 2048, 00:33:34.133 "data_size": 63488 00:33:34.133 }, 00:33:34.133 { 00:33:34.133 "name": "pt2", 00:33:34.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:34.133 "is_configured": 
true, 00:33:34.133 "data_offset": 2048, 00:33:34.133 "data_size": 63488 00:33:34.133 }, 00:33:34.133 { 00:33:34.133 "name": "pt3", 00:33:34.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:34.133 "is_configured": true, 00:33:34.133 "data_offset": 2048, 00:33:34.133 "data_size": 63488 00:33:34.133 }, 00:33:34.133 { 00:33:34.133 "name": "pt4", 00:33:34.133 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:34.133 "is_configured": true, 00:33:34.133 "data_offset": 2048, 00:33:34.133 "data_size": 63488 00:33:34.133 } 00:33:34.133 ] 00:33:34.133 }' 00:33:34.133 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.133 23:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:34.698 23:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:34.957 [2024-07-13 23:19:24.255877] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:34.957 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:34.957 "name": "raid_bdev1", 00:33:34.957 "aliases": [ 00:33:34.957 "bcbdee1b-9aab-4a59-813d-97beca9de828" 00:33:34.957 ], 00:33:34.957 "product_name": "Raid Volume", 00:33:34.957 "block_size": 512, 00:33:34.957 "num_blocks": 190464, 00:33:34.957 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:34.957 "assigned_rate_limits": { 00:33:34.957 "rw_ios_per_sec": 0, 00:33:34.957 "rw_mbytes_per_sec": 0, 00:33:34.957 "r_mbytes_per_sec": 0, 00:33:34.957 "w_mbytes_per_sec": 0 00:33:34.957 }, 00:33:34.957 "claimed": false, 00:33:34.957 "zoned": false, 00:33:34.957 "supported_io_types": { 00:33:34.957 "read": true, 00:33:34.957 "write": true, 00:33:34.957 "unmap": false, 00:33:34.957 "flush": false, 00:33:34.957 "reset": true, 00:33:34.957 "nvme_admin": false, 00:33:34.957 "nvme_io": false, 00:33:34.957 "nvme_io_md": false, 00:33:34.957 "write_zeroes": true, 00:33:34.957 "zcopy": false, 00:33:34.957 "get_zone_info": false, 00:33:34.957 "zone_management": false, 00:33:34.957 "zone_append": false, 00:33:34.957 "compare": false, 00:33:34.957 "compare_and_write": false, 00:33:34.957 "abort": false, 00:33:34.957 "seek_hole": false, 00:33:34.957 "seek_data": false, 00:33:34.957 "copy": false, 00:33:34.957 "nvme_iov_md": false 00:33:34.957 }, 00:33:34.957 "driver_specific": { 00:33:34.957 "raid": { 00:33:34.957 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:34.957 "strip_size_kb": 64, 00:33:34.957 "state": "online", 00:33:34.957 "raid_level": "raid5f", 00:33:34.957 "superblock": true, 00:33:34.957 "num_base_bdevs": 4, 00:33:34.957 "num_base_bdevs_discovered": 4, 00:33:34.957 "num_base_bdevs_operational": 4, 
00:33:34.957 "base_bdevs_list": [ 00:33:34.957 { 00:33:34.957 "name": "pt1", 00:33:34.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:34.957 "is_configured": true, 00:33:34.957 "data_offset": 2048, 00:33:34.957 "data_size": 63488 00:33:34.957 }, 00:33:34.957 { 00:33:34.957 "name": "pt2", 00:33:34.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:34.957 "is_configured": true, 00:33:34.957 "data_offset": 2048, 00:33:34.957 "data_size": 63488 00:33:34.957 }, 00:33:34.957 { 00:33:34.957 "name": "pt3", 00:33:34.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:34.957 "is_configured": true, 00:33:34.957 "data_offset": 2048, 00:33:34.957 "data_size": 63488 00:33:34.957 }, 00:33:34.957 { 00:33:34.957 "name": "pt4", 00:33:34.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:34.957 "is_configured": true, 00:33:34.957 "data_offset": 2048, 00:33:34.957 "data_size": 63488 00:33:34.957 } 00:33:34.957 ] 00:33:34.957 } 00:33:34.957 } 00:33:34.957 }' 00:33:34.957 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:34.957 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:34.957 pt2 00:33:34.957 pt3 00:33:34.957 pt4' 00:33:34.957 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:34.957 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:34.957 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:35.215 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:35.215 "name": "pt1", 00:33:35.215 "aliases": [ 00:33:35.215 "00000000-0000-0000-0000-000000000001" 00:33:35.215 ], 00:33:35.215 "product_name": "passthru", 00:33:35.215 "block_size": 512, 00:33:35.215 "num_blocks": 65536, 00:33:35.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:35.215 "assigned_rate_limits": { 00:33:35.215 "rw_ios_per_sec": 0, 00:33:35.215 "rw_mbytes_per_sec": 0, 00:33:35.215 "r_mbytes_per_sec": 0, 00:33:35.215 "w_mbytes_per_sec": 0 00:33:35.215 }, 00:33:35.215 "claimed": true, 00:33:35.215 "claim_type": "exclusive_write", 00:33:35.215 "zoned": false, 00:33:35.215 "supported_io_types": { 00:33:35.215 "read": true, 00:33:35.215 "write": true, 00:33:35.215 "unmap": true, 00:33:35.215 "flush": true, 00:33:35.215 "reset": true, 00:33:35.215 "nvme_admin": false, 00:33:35.215 "nvme_io": false, 00:33:35.215 "nvme_io_md": false, 00:33:35.215 "write_zeroes": true, 00:33:35.215 "zcopy": true, 00:33:35.215 "get_zone_info": false, 00:33:35.215 "zone_management": false, 00:33:35.215 "zone_append": false, 00:33:35.215 "compare": false, 00:33:35.215 "compare_and_write": false, 00:33:35.215 "abort": true, 00:33:35.215 "seek_hole": false, 00:33:35.215 "seek_data": false, 00:33:35.215 "copy": true, 00:33:35.215 "nvme_iov_md": false 00:33:35.215 }, 00:33:35.215 "memory_domains": [ 00:33:35.215 { 00:33:35.215 "dma_device_id": "system", 00:33:35.215 "dma_device_type": 1 00:33:35.215 }, 00:33:35.215 { 00:33:35.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:35.215 "dma_device_type": 2 00:33:35.215 } 00:33:35.215 ], 00:33:35.215 "driver_specific": { 00:33:35.215 "passthru": { 00:33:35.215 "name": "pt1", 00:33:35.215 "base_bdev_name": "malloc1" 00:33:35.215 } 00:33:35.215 } 00:33:35.215 }' 00:33:35.215 23:19:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:35.215 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:35.473 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:35.781 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:35.781 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:35.781 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:35.781 23:19:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:35.781 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:35.781 "name": "pt2", 00:33:35.781 "aliases": [ 00:33:35.781 "00000000-0000-0000-0000-000000000002" 00:33:35.781 ], 00:33:35.781 "product_name": "passthru", 00:33:35.781 "block_size": 512, 00:33:35.781 "num_blocks": 65536, 00:33:35.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:35.781 "assigned_rate_limits": { 00:33:35.781 "rw_ios_per_sec": 0, 00:33:35.781 "rw_mbytes_per_sec": 0, 00:33:35.781 "r_mbytes_per_sec": 0, 00:33:35.781 "w_mbytes_per_sec": 0 00:33:35.781 }, 00:33:35.781 "claimed": true, 00:33:35.781 "claim_type": "exclusive_write", 00:33:35.781 "zoned": false, 00:33:35.781 "supported_io_types": { 00:33:35.781 "read": true, 00:33:35.781 "write": true, 00:33:35.781 "unmap": true, 00:33:35.781 "flush": true, 00:33:35.781 "reset": true, 00:33:35.781 "nvme_admin": false, 00:33:35.781 "nvme_io": false, 00:33:35.781 "nvme_io_md": false, 00:33:35.781 "write_zeroes": true, 00:33:35.781 "zcopy": true, 00:33:35.781 "get_zone_info": false, 00:33:35.781 "zone_management": false, 00:33:35.781 "zone_append": false, 00:33:35.781 "compare": false, 00:33:35.781 "compare_and_write": false, 00:33:35.781 "abort": true, 00:33:35.781 "seek_hole": false, 00:33:35.781 "seek_data": false, 00:33:35.781 "copy": true, 00:33:35.781 "nvme_iov_md": false 00:33:35.781 }, 00:33:35.781 "memory_domains": [ 00:33:35.781 { 00:33:35.781 "dma_device_id": "system", 00:33:35.781 "dma_device_type": 1 00:33:35.781 }, 00:33:35.781 { 00:33:35.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:35.781 "dma_device_type": 2 00:33:35.781 } 00:33:35.781 ], 00:33:35.781 "driver_specific": { 00:33:35.781 "passthru": { 00:33:35.781 "name": "pt2", 00:33:35.781 "base_bdev_name": "malloc2" 00:33:35.781 } 00:33:35.781 } 00:33:35.781 }' 00:33:35.781 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:36.038 23:19:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:36.038 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.294 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.294 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:36.294 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:36.294 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:36.294 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:36.551 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:36.551 "name": "pt3", 00:33:36.551 "aliases": [ 00:33:36.551 "00000000-0000-0000-0000-000000000003" 00:33:36.551 ], 00:33:36.551 "product_name": "passthru", 00:33:36.551 "block_size": 512, 00:33:36.551 "num_blocks": 65536, 00:33:36.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:36.551 "assigned_rate_limits": { 00:33:36.551 "rw_ios_per_sec": 0, 00:33:36.551 "rw_mbytes_per_sec": 0, 00:33:36.551 "r_mbytes_per_sec": 0, 00:33:36.551 "w_mbytes_per_sec": 0 00:33:36.551 }, 00:33:36.551 "claimed": true, 00:33:36.551 "claim_type": "exclusive_write", 00:33:36.551 "zoned": false, 00:33:36.551 "supported_io_types": { 00:33:36.551 "read": true, 00:33:36.551 "write": true, 00:33:36.551 "unmap": true, 00:33:36.551 "flush": true, 00:33:36.551 "reset": true, 00:33:36.551 "nvme_admin": false, 00:33:36.551 "nvme_io": false, 00:33:36.551 "nvme_io_md": false, 00:33:36.551 "write_zeroes": true, 00:33:36.551 "zcopy": true, 00:33:36.551 "get_zone_info": false, 00:33:36.551 "zone_management": false, 00:33:36.551 "zone_append": false, 00:33:36.551 "compare": false, 00:33:36.551 "compare_and_write": false, 00:33:36.551 "abort": true, 00:33:36.551 "seek_hole": false, 00:33:36.551 "seek_data": false, 00:33:36.551 "copy": true, 00:33:36.551 "nvme_iov_md": false 00:33:36.551 }, 00:33:36.551 "memory_domains": [ 00:33:36.551 { 00:33:36.551 "dma_device_id": "system", 00:33:36.551 "dma_device_type": 1 00:33:36.551 }, 00:33:36.551 { 00:33:36.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:36.551 "dma_device_type": 2 00:33:36.551 } 00:33:36.551 ], 00:33:36.551 "driver_specific": { 00:33:36.551 "passthru": { 00:33:36.551 "name": "pt3", 00:33:36.551 "base_bdev_name": "malloc3" 00:33:36.551 } 00:33:36.551 } 00:33:36.551 }' 00:33:36.551 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:36.551 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:36.807 23:19:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:36.807 23:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.807 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:37.064 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:37.064 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:37.064 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:37.064 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:37.322 "name": "pt4", 00:33:37.322 "aliases": [ 00:33:37.322 "00000000-0000-0000-0000-000000000004" 00:33:37.322 ], 00:33:37.322 "product_name": "passthru", 00:33:37.322 "block_size": 512, 00:33:37.322 "num_blocks": 65536, 00:33:37.322 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:37.322 "assigned_rate_limits": { 00:33:37.322 "rw_ios_per_sec": 0, 00:33:37.322 "rw_mbytes_per_sec": 0, 00:33:37.322 "r_mbytes_per_sec": 0, 00:33:37.322 "w_mbytes_per_sec": 0 00:33:37.322 }, 00:33:37.322 "claimed": true, 00:33:37.322 "claim_type": "exclusive_write", 00:33:37.322 "zoned": false, 00:33:37.322 "supported_io_types": { 00:33:37.322 "read": true, 00:33:37.322 "write": true, 00:33:37.322 "unmap": true, 00:33:37.322 "flush": true, 00:33:37.322 "reset": true, 00:33:37.322 "nvme_admin": false, 00:33:37.322 "nvme_io": false, 00:33:37.322 "nvme_io_md": false, 00:33:37.322 "write_zeroes": true, 00:33:37.322 "zcopy": true, 00:33:37.322 "get_zone_info": false, 00:33:37.322 "zone_management": false, 00:33:37.322 "zone_append": false, 00:33:37.322 "compare": false, 00:33:37.322 "compare_and_write": false, 00:33:37.322 "abort": true, 00:33:37.322 "seek_hole": false, 00:33:37.322 "seek_data": false, 00:33:37.322 "copy": true, 00:33:37.322 "nvme_iov_md": false 00:33:37.322 }, 00:33:37.322 "memory_domains": [ 00:33:37.322 { 00:33:37.322 "dma_device_id": "system", 00:33:37.322 "dma_device_type": 1 00:33:37.322 }, 00:33:37.322 { 00:33:37.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.322 "dma_device_type": 2 00:33:37.322 } 00:33:37.322 ], 00:33:37.322 "driver_specific": { 00:33:37.322 "passthru": { 00:33:37.322 "name": "pt4", 00:33:37.322 "base_bdev_name": "malloc4" 00:33:37.322 } 00:33:37.322 } 00:33:37.322 }' 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:37.322 23:19:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:37.322 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:37.580 23:19:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:33:37.838 [2024-07-13 23:19:27.176634] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:37.838 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bcbdee1b-9aab-4a59-813d-97beca9de828 00:33:37.838 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bcbdee1b-9aab-4a59-813d-97beca9de828 ']' 00:33:37.838 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:38.095 [2024-07-13 23:19:27.428489] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:38.095 [2024-07-13 23:19:27.428674] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:38.095 [2024-07-13 23:19:27.428893] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:38.095 [2024-07-13 23:19:27.429164] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:38.095 [2024-07-13 23:19:27.429279] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:33:38.095 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.095 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:33:38.353 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:33:38.353 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:33:38.353 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:38.353 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:38.611 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:38.611 23:19:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:38.868 23:19:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:38.868 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:39.126 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:39.126 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:39.383 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:39.383 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:39.641 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:33:39.641 23:19:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:39.641 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:39.642 23:19:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:39.899 [2024-07-13 23:19:29.192821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:39.899 [2024-07-13 23:19:29.195021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:39.899 [2024-07-13 23:19:29.195238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:39.899 [2024-07-13 23:19:29.195323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:33:39.899 [2024-07-13 23:19:29.195494] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc1 00:33:39.899 [2024-07-13 23:19:29.195714] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:39.899 [2024-07-13 23:19:29.195910] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:39.899 [2024-07-13 23:19:29.196084] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:33:39.899 [2024-07-13 23:19:29.196235] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:39.899 [2024-07-13 23:19:29.196374] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:33:39.899 request: 00:33:39.899 { 00:33:39.899 "name": "raid_bdev1", 00:33:39.899 "raid_level": "raid5f", 00:33:39.899 "base_bdevs": [ 00:33:39.899 "malloc1", 00:33:39.899 "malloc2", 00:33:39.899 "malloc3", 00:33:39.899 "malloc4" 00:33:39.899 ], 00:33:39.899 "strip_size_kb": 64, 00:33:39.899 "superblock": false, 00:33:39.899 "method": "bdev_raid_create", 00:33:39.899 "req_id": 1 00:33:39.899 } 00:33:39.899 Got JSON-RPC error response 00:33:39.899 response: 00:33:39.899 { 00:33:39.899 "code": -17, 00:33:39.899 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:39.899 } 00:33:39.899 23:19:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:33:39.900 23:19:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:39.900 23:19:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:39.900 23:19:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:39.900 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.900 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:33:40.157 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:33:40.157 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:33:40.157 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:40.414 [2024-07-13 23:19:29.800979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:40.414 [2024-07-13 23:19:29.801277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:40.414 [2024-07-13 23:19:29.801494] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:40.414 [2024-07-13 23:19:29.801621] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:40.414 [2024-07-13 23:19:29.803983] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:40.414 [2024-07-13 23:19:29.804182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:40.414 [2024-07-13 23:19:29.804382] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:40.414 [2024-07-13 23:19:29.804575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:40.414 pt1 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.672 23:19:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.672 23:19:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.672 "name": "raid_bdev1", 00:33:40.672 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:40.672 "strip_size_kb": 64, 00:33:40.672 "state": "configuring", 00:33:40.672 "raid_level": "raid5f", 00:33:40.672 "superblock": true, 00:33:40.672 "num_base_bdevs": 4, 00:33:40.672 "num_base_bdevs_discovered": 1, 00:33:40.672 "num_base_bdevs_operational": 4, 00:33:40.672 "base_bdevs_list": [ 00:33:40.672 { 00:33:40.672 "name": "pt1", 00:33:40.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:40.672 "is_configured": true, 00:33:40.672 "data_offset": 2048, 00:33:40.672 "data_size": 63488 00:33:40.672 }, 00:33:40.672 { 00:33:40.672 "name": null, 00:33:40.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:40.672 "is_configured": false, 00:33:40.672 "data_offset": 2048, 00:33:40.672 "data_size": 63488 00:33:40.672 }, 00:33:40.672 { 00:33:40.672 "name": null, 00:33:40.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:40.672 "is_configured": false, 00:33:40.672 "data_offset": 2048, 00:33:40.672 "data_size": 63488 00:33:40.672 }, 00:33:40.672 { 00:33:40.672 "name": null, 00:33:40.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:40.672 "is_configured": false, 00:33:40.672 "data_offset": 2048, 00:33:40.672 "data_size": 63488 00:33:40.672 } 00:33:40.672 ] 00:33:40.672 }' 00:33:40.673 23:19:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.673 23:19:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.606 23:19:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:33:41.606 23:19:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:41.606 [2024-07-13 23:19:30.889295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:41.606 [2024-07-13 23:19:30.889598] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:33:41.606 [2024-07-13 23:19:30.889691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:41.606 [2024-07-13 23:19:30.889900] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:41.606 [2024-07-13 23:19:30.890492] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:41.606 [2024-07-13 23:19:30.890684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:41.606 [2024-07-13 23:19:30.890903] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:41.606 [2024-07-13 23:19:30.891065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:41.606 pt2 00:33:41.606 23:19:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:41.863 [2024-07-13 23:19:31.157381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.864 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.122 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:42.122 "name": "raid_bdev1", 00:33:42.122 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:42.122 "strip_size_kb": 64, 00:33:42.122 "state": "configuring", 00:33:42.122 "raid_level": "raid5f", 00:33:42.122 "superblock": true, 00:33:42.122 "num_base_bdevs": 4, 00:33:42.122 "num_base_bdevs_discovered": 1, 00:33:42.122 "num_base_bdevs_operational": 4, 00:33:42.122 "base_bdevs_list": [ 00:33:42.122 { 00:33:42.122 "name": "pt1", 00:33:42.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:42.122 "is_configured": true, 00:33:42.122 "data_offset": 2048, 00:33:42.122 "data_size": 63488 00:33:42.122 }, 00:33:42.122 { 00:33:42.122 "name": null, 00:33:42.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:42.122 "is_configured": false, 00:33:42.122 "data_offset": 2048, 00:33:42.122 "data_size": 63488 00:33:42.122 }, 00:33:42.122 { 00:33:42.122 "name": null, 00:33:42.122 "uuid": "00000000-0000-0000-0000-000000000003", 
00:33:42.122 "is_configured": false, 00:33:42.122 "data_offset": 2048, 00:33:42.122 "data_size": 63488 00:33:42.122 }, 00:33:42.122 { 00:33:42.122 "name": null, 00:33:42.122 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:42.122 "is_configured": false, 00:33:42.122 "data_offset": 2048, 00:33:42.122 "data_size": 63488 00:33:42.122 } 00:33:42.122 ] 00:33:42.122 }' 00:33:42.122 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:42.122 23:19:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.688 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:33:42.688 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:42.688 23:19:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:42.946 [2024-07-13 23:19:32.257749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:42.946 [2024-07-13 23:19:32.258042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:42.946 [2024-07-13 23:19:32.258203] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:33:42.946 [2024-07-13 23:19:32.258380] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:42.946 [2024-07-13 23:19:32.259083] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:42.946 [2024-07-13 23:19:32.259302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:42.946 [2024-07-13 23:19:32.259531] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:42.946 [2024-07-13 23:19:32.259674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:42.946 pt2 00:33:42.946 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:42.946 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:42.946 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:43.205 [2024-07-13 23:19:32.533751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:43.205 [2024-07-13 23:19:32.534091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.205 [2024-07-13 23:19:32.534166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:43.205 [2024-07-13 23:19:32.534414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.205 [2024-07-13 23:19:32.534932] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.205 [2024-07-13 23:19:32.535172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:43.205 [2024-07-13 23:19:32.535449] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:43.205 [2024-07-13 23:19:32.535596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:43.205 pt3 00:33:43.205 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:43.205 23:19:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:43.205 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:43.463 [2024-07-13 23:19:32.781787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:43.463 [2024-07-13 23:19:32.782039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.463 [2024-07-13 23:19:32.782185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:43.463 [2024-07-13 23:19:32.782313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.463 [2024-07-13 23:19:32.782846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.463 [2024-07-13 23:19:32.783055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:43.463 [2024-07-13 23:19:32.783294] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:43.463 [2024-07-13 23:19:32.783434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:43.463 [2024-07-13 23:19:32.783742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:33:43.463 [2024-07-13 23:19:32.783904] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:43.463 [2024-07-13 23:19:32.784024] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:33:43.463 [2024-07-13 23:19:32.784752] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:33:43.463 [2024-07-13 23:19:32.784892] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:33:43.463 [2024-07-13 23:19:32.785209] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:43.463 pt4 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.463 23:19:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.722 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.722 "name": "raid_bdev1", 00:33:43.722 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:43.722 "strip_size_kb": 64, 00:33:43.722 "state": "online", 00:33:43.722 "raid_level": "raid5f", 00:33:43.722 "superblock": true, 00:33:43.722 "num_base_bdevs": 4, 00:33:43.722 "num_base_bdevs_discovered": 4, 00:33:43.722 "num_base_bdevs_operational": 4, 00:33:43.722 "base_bdevs_list": [ 00:33:43.722 { 00:33:43.722 "name": "pt1", 00:33:43.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:43.722 "is_configured": true, 00:33:43.722 "data_offset": 2048, 00:33:43.722 "data_size": 63488 00:33:43.722 }, 00:33:43.722 { 00:33:43.722 "name": "pt2", 00:33:43.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:43.722 "is_configured": true, 00:33:43.722 "data_offset": 2048, 00:33:43.722 "data_size": 63488 00:33:43.722 }, 00:33:43.722 { 00:33:43.722 "name": "pt3", 00:33:43.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:43.722 "is_configured": true, 00:33:43.722 "data_offset": 2048, 00:33:43.722 "data_size": 63488 00:33:43.722 }, 00:33:43.722 { 00:33:43.722 "name": "pt4", 00:33:43.722 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:43.722 "is_configured": true, 00:33:43.722 "data_offset": 2048, 00:33:43.722 "data_size": 63488 00:33:43.722 } 00:33:43.722 ] 00:33:43.722 }' 00:33:43.722 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.722 23:19:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:44.651 [2024-07-13 23:19:33.977807] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:44.651 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:44.651 "name": "raid_bdev1", 00:33:44.651 "aliases": [ 00:33:44.651 "bcbdee1b-9aab-4a59-813d-97beca9de828" 00:33:44.651 ], 00:33:44.651 "product_name": "Raid Volume", 00:33:44.651 "block_size": 512, 00:33:44.651 "num_blocks": 190464, 00:33:44.651 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:44.651 "assigned_rate_limits": { 00:33:44.651 "rw_ios_per_sec": 0, 00:33:44.651 "rw_mbytes_per_sec": 0, 00:33:44.651 "r_mbytes_per_sec": 0, 00:33:44.651 "w_mbytes_per_sec": 0 00:33:44.651 }, 00:33:44.651 "claimed": false, 00:33:44.651 "zoned": false, 00:33:44.651 "supported_io_types": { 00:33:44.651 
"read": true, 00:33:44.651 "write": true, 00:33:44.651 "unmap": false, 00:33:44.651 "flush": false, 00:33:44.651 "reset": true, 00:33:44.651 "nvme_admin": false, 00:33:44.651 "nvme_io": false, 00:33:44.651 "nvme_io_md": false, 00:33:44.651 "write_zeroes": true, 00:33:44.651 "zcopy": false, 00:33:44.651 "get_zone_info": false, 00:33:44.651 "zone_management": false, 00:33:44.651 "zone_append": false, 00:33:44.651 "compare": false, 00:33:44.651 "compare_and_write": false, 00:33:44.651 "abort": false, 00:33:44.651 "seek_hole": false, 00:33:44.651 "seek_data": false, 00:33:44.651 "copy": false, 00:33:44.651 "nvme_iov_md": false 00:33:44.651 }, 00:33:44.651 "driver_specific": { 00:33:44.651 "raid": { 00:33:44.651 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:44.651 "strip_size_kb": 64, 00:33:44.651 "state": "online", 00:33:44.651 "raid_level": "raid5f", 00:33:44.651 "superblock": true, 00:33:44.651 "num_base_bdevs": 4, 00:33:44.651 "num_base_bdevs_discovered": 4, 00:33:44.651 "num_base_bdevs_operational": 4, 00:33:44.652 "base_bdevs_list": [ 00:33:44.652 { 00:33:44.652 "name": "pt1", 00:33:44.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:44.652 "is_configured": true, 00:33:44.652 "data_offset": 2048, 00:33:44.652 "data_size": 63488 00:33:44.652 }, 00:33:44.652 { 00:33:44.652 "name": "pt2", 00:33:44.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:44.652 "is_configured": true, 00:33:44.652 "data_offset": 2048, 00:33:44.652 "data_size": 63488 00:33:44.652 }, 00:33:44.652 { 00:33:44.652 "name": "pt3", 00:33:44.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:44.652 "is_configured": true, 00:33:44.652 "data_offset": 2048, 00:33:44.652 "data_size": 63488 00:33:44.652 }, 00:33:44.652 { 00:33:44.652 "name": "pt4", 00:33:44.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:44.652 "is_configured": true, 00:33:44.652 "data_offset": 2048, 00:33:44.652 "data_size": 63488 00:33:44.652 } 00:33:44.652 ] 00:33:44.652 } 00:33:44.652 } 00:33:44.652 }' 00:33:44.652 23:19:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:44.652 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:44.652 pt2 00:33:44.652 pt3 00:33:44.652 pt4' 00:33:44.652 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:44.652 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:44.652 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:44.910 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:44.910 "name": "pt1", 00:33:44.910 "aliases": [ 00:33:44.910 "00000000-0000-0000-0000-000000000001" 00:33:44.910 ], 00:33:44.910 "product_name": "passthru", 00:33:44.910 "block_size": 512, 00:33:44.910 "num_blocks": 65536, 00:33:44.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:44.910 "assigned_rate_limits": { 00:33:44.910 "rw_ios_per_sec": 0, 00:33:44.910 "rw_mbytes_per_sec": 0, 00:33:44.910 "r_mbytes_per_sec": 0, 00:33:44.910 "w_mbytes_per_sec": 0 00:33:44.910 }, 00:33:44.910 "claimed": true, 00:33:44.910 "claim_type": "exclusive_write", 00:33:44.910 "zoned": false, 00:33:44.910 "supported_io_types": { 00:33:44.910 "read": true, 00:33:44.910 "write": true, 00:33:44.910 "unmap": true, 00:33:44.910 
"flush": true, 00:33:44.910 "reset": true, 00:33:44.910 "nvme_admin": false, 00:33:44.910 "nvme_io": false, 00:33:44.910 "nvme_io_md": false, 00:33:44.910 "write_zeroes": true, 00:33:44.910 "zcopy": true, 00:33:44.910 "get_zone_info": false, 00:33:44.910 "zone_management": false, 00:33:44.910 "zone_append": false, 00:33:44.910 "compare": false, 00:33:44.910 "compare_and_write": false, 00:33:44.910 "abort": true, 00:33:44.910 "seek_hole": false, 00:33:44.910 "seek_data": false, 00:33:44.910 "copy": true, 00:33:44.910 "nvme_iov_md": false 00:33:44.910 }, 00:33:44.910 "memory_domains": [ 00:33:44.910 { 00:33:44.910 "dma_device_id": "system", 00:33:44.910 "dma_device_type": 1 00:33:44.910 }, 00:33:44.910 { 00:33:44.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.910 "dma_device_type": 2 00:33:44.910 } 00:33:44.910 ], 00:33:44.910 "driver_specific": { 00:33:44.910 "passthru": { 00:33:44.910 "name": "pt1", 00:33:44.910 "base_bdev_name": "malloc1" 00:33:44.910 } 00:33:44.910 } 00:33:44.910 }' 00:33:44.910 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:45.168 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:45.426 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:45.426 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:45.426 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:45.426 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:45.426 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:45.682 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:45.682 "name": "pt2", 00:33:45.682 "aliases": [ 00:33:45.682 "00000000-0000-0000-0000-000000000002" 00:33:45.682 ], 00:33:45.682 "product_name": "passthru", 00:33:45.682 "block_size": 512, 00:33:45.682 "num_blocks": 65536, 00:33:45.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:45.683 "assigned_rate_limits": { 00:33:45.683 "rw_ios_per_sec": 0, 00:33:45.683 "rw_mbytes_per_sec": 0, 00:33:45.683 "r_mbytes_per_sec": 0, 00:33:45.683 "w_mbytes_per_sec": 0 00:33:45.683 }, 00:33:45.683 "claimed": true, 00:33:45.683 "claim_type": "exclusive_write", 00:33:45.683 "zoned": false, 00:33:45.683 "supported_io_types": { 00:33:45.683 "read": true, 00:33:45.683 "write": true, 00:33:45.683 "unmap": true, 00:33:45.683 "flush": true, 00:33:45.683 "reset": true, 00:33:45.683 "nvme_admin": false, 00:33:45.683 "nvme_io": false, 00:33:45.683 
"nvme_io_md": false, 00:33:45.683 "write_zeroes": true, 00:33:45.683 "zcopy": true, 00:33:45.683 "get_zone_info": false, 00:33:45.683 "zone_management": false, 00:33:45.683 "zone_append": false, 00:33:45.683 "compare": false, 00:33:45.683 "compare_and_write": false, 00:33:45.683 "abort": true, 00:33:45.683 "seek_hole": false, 00:33:45.683 "seek_data": false, 00:33:45.683 "copy": true, 00:33:45.683 "nvme_iov_md": false 00:33:45.683 }, 00:33:45.683 "memory_domains": [ 00:33:45.683 { 00:33:45.683 "dma_device_id": "system", 00:33:45.683 "dma_device_type": 1 00:33:45.683 }, 00:33:45.683 { 00:33:45.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:45.683 "dma_device_type": 2 00:33:45.683 } 00:33:45.683 ], 00:33:45.683 "driver_specific": { 00:33:45.683 "passthru": { 00:33:45.683 "name": "pt2", 00:33:45.683 "base_bdev_name": "malloc2" 00:33:45.683 } 00:33:45.683 } 00:33:45.683 }' 00:33:45.683 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:45.683 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:45.683 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:45.683 23:19:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:45.683 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:45.940 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:46.197 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:46.197 "name": "pt3", 00:33:46.197 "aliases": [ 00:33:46.197 "00000000-0000-0000-0000-000000000003" 00:33:46.197 ], 00:33:46.197 "product_name": "passthru", 00:33:46.197 "block_size": 512, 00:33:46.197 "num_blocks": 65536, 00:33:46.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:46.197 "assigned_rate_limits": { 00:33:46.197 "rw_ios_per_sec": 0, 00:33:46.197 "rw_mbytes_per_sec": 0, 00:33:46.197 "r_mbytes_per_sec": 0, 00:33:46.197 "w_mbytes_per_sec": 0 00:33:46.197 }, 00:33:46.197 "claimed": true, 00:33:46.197 "claim_type": "exclusive_write", 00:33:46.197 "zoned": false, 00:33:46.197 "supported_io_types": { 00:33:46.197 "read": true, 00:33:46.197 "write": true, 00:33:46.197 "unmap": true, 00:33:46.197 "flush": true, 00:33:46.197 "reset": true, 00:33:46.198 "nvme_admin": false, 00:33:46.198 "nvme_io": false, 00:33:46.198 "nvme_io_md": false, 00:33:46.198 "write_zeroes": true, 00:33:46.198 "zcopy": true, 00:33:46.198 "get_zone_info": false, 
00:33:46.198 "zone_management": false, 00:33:46.198 "zone_append": false, 00:33:46.198 "compare": false, 00:33:46.198 "compare_and_write": false, 00:33:46.198 "abort": true, 00:33:46.198 "seek_hole": false, 00:33:46.198 "seek_data": false, 00:33:46.198 "copy": true, 00:33:46.198 "nvme_iov_md": false 00:33:46.198 }, 00:33:46.198 "memory_domains": [ 00:33:46.198 { 00:33:46.198 "dma_device_id": "system", 00:33:46.198 "dma_device_type": 1 00:33:46.198 }, 00:33:46.198 { 00:33:46.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:46.198 "dma_device_type": 2 00:33:46.198 } 00:33:46.198 ], 00:33:46.198 "driver_specific": { 00:33:46.198 "passthru": { 00:33:46.198 "name": "pt3", 00:33:46.198 "base_bdev_name": "malloc3" 00:33:46.198 } 00:33:46.198 } 00:33:46.198 }' 00:33:46.198 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:46.455 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:46.713 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:46.713 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:46.713 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:46.713 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:33:46.713 23:19:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:46.970 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:46.970 "name": "pt4", 00:33:46.970 "aliases": [ 00:33:46.970 "00000000-0000-0000-0000-000000000004" 00:33:46.970 ], 00:33:46.970 "product_name": "passthru", 00:33:46.970 "block_size": 512, 00:33:46.970 "num_blocks": 65536, 00:33:46.970 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:46.970 "assigned_rate_limits": { 00:33:46.970 "rw_ios_per_sec": 0, 00:33:46.970 "rw_mbytes_per_sec": 0, 00:33:46.970 "r_mbytes_per_sec": 0, 00:33:46.970 "w_mbytes_per_sec": 0 00:33:46.970 }, 00:33:46.970 "claimed": true, 00:33:46.970 "claim_type": "exclusive_write", 00:33:46.970 "zoned": false, 00:33:46.970 "supported_io_types": { 00:33:46.970 "read": true, 00:33:46.970 "write": true, 00:33:46.970 "unmap": true, 00:33:46.970 "flush": true, 00:33:46.970 "reset": true, 00:33:46.970 "nvme_admin": false, 00:33:46.970 "nvme_io": false, 00:33:46.970 "nvme_io_md": false, 00:33:46.970 "write_zeroes": true, 00:33:46.970 "zcopy": true, 00:33:46.970 "get_zone_info": false, 00:33:46.970 "zone_management": false, 00:33:46.970 "zone_append": false, 00:33:46.970 "compare": false, 00:33:46.970 
"compare_and_write": false, 00:33:46.970 "abort": true, 00:33:46.970 "seek_hole": false, 00:33:46.970 "seek_data": false, 00:33:46.970 "copy": true, 00:33:46.970 "nvme_iov_md": false 00:33:46.970 }, 00:33:46.970 "memory_domains": [ 00:33:46.970 { 00:33:46.970 "dma_device_id": "system", 00:33:46.970 "dma_device_type": 1 00:33:46.970 }, 00:33:46.970 { 00:33:46.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:46.970 "dma_device_type": 2 00:33:46.970 } 00:33:46.970 ], 00:33:46.970 "driver_specific": { 00:33:46.970 "passthru": { 00:33:46.970 "name": "pt4", 00:33:46.970 "base_bdev_name": "malloc4" 00:33:46.970 } 00:33:46.970 } 00:33:46.970 }' 00:33:46.970 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:46.970 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:46.970 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:46.970 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:46.971 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:47.228 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:33:47.485 [2024-07-13 23:19:36.842432] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:47.485 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bcbdee1b-9aab-4a59-813d-97beca9de828 '!=' bcbdee1b-9aab-4a59-813d-97beca9de828 ']' 00:33:47.485 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:33:47.485 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:47.485 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:47.485 23:19:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:47.743 [2024-07-13 23:19:37.110373] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.743 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.308 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:48.308 "name": "raid_bdev1", 00:33:48.308 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:48.308 "strip_size_kb": 64, 00:33:48.308 "state": "online", 00:33:48.308 "raid_level": "raid5f", 00:33:48.308 "superblock": true, 00:33:48.308 "num_base_bdevs": 4, 00:33:48.308 "num_base_bdevs_discovered": 3, 00:33:48.308 "num_base_bdevs_operational": 3, 00:33:48.308 "base_bdevs_list": [ 00:33:48.308 { 00:33:48.308 "name": null, 00:33:48.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.308 "is_configured": false, 00:33:48.308 "data_offset": 2048, 00:33:48.308 "data_size": 63488 00:33:48.308 }, 00:33:48.308 { 00:33:48.308 "name": "pt2", 00:33:48.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:48.308 "is_configured": true, 00:33:48.308 "data_offset": 2048, 00:33:48.308 "data_size": 63488 00:33:48.308 }, 00:33:48.308 { 00:33:48.308 "name": "pt3", 00:33:48.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:48.308 "is_configured": true, 00:33:48.308 "data_offset": 2048, 00:33:48.308 "data_size": 63488 00:33:48.308 }, 00:33:48.308 { 00:33:48.308 "name": "pt4", 00:33:48.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:48.308 "is_configured": true, 00:33:48.308 "data_offset": 2048, 00:33:48.308 "data_size": 63488 00:33:48.308 } 00:33:48.308 ] 00:33:48.308 }' 00:33:48.308 23:19:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:48.308 23:19:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.890 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:49.148 [2024-07-13 23:19:38.310587] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:49.148 [2024-07-13 23:19:38.310802] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:49.148 [2024-07-13 23:19:38.310984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:49.148 [2024-07-13 23:19:38.311178] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:49.148 [2024-07-13 23:19:38.311283] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:33:49.148 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:33:49.148 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:49.405 23:19:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:49.970 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:50.228 [2024-07-13 23:19:39.522757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:50.228 [2024-07-13 23:19:39.523555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:50.228 [2024-07-13 23:19:39.523892] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:33:50.228 [2024-07-13 23:19:39.524256] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:50.228 [2024-07-13 23:19:39.527050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:50.228 [2024-07-13 23:19:39.527410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:50.228 [2024-07-13 23:19:39.527755] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:50.228 [2024-07-13 23:19:39.527958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:50.228 pt2 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.228 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.487 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:50.487 "name": "raid_bdev1", 00:33:50.487 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:50.487 "strip_size_kb": 64, 00:33:50.487 "state": "configuring", 00:33:50.487 "raid_level": "raid5f", 00:33:50.487 "superblock": true, 00:33:50.487 "num_base_bdevs": 4, 00:33:50.487 "num_base_bdevs_discovered": 1, 00:33:50.487 "num_base_bdevs_operational": 3, 00:33:50.487 "base_bdevs_list": [ 00:33:50.487 { 00:33:50.487 "name": null, 00:33:50.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.487 "is_configured": false, 00:33:50.487 "data_offset": 2048, 00:33:50.487 "data_size": 63488 00:33:50.487 }, 00:33:50.487 { 00:33:50.487 "name": "pt2", 00:33:50.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:50.487 "is_configured": true, 00:33:50.487 "data_offset": 2048, 00:33:50.487 "data_size": 63488 00:33:50.487 }, 00:33:50.487 { 00:33:50.487 "name": null, 00:33:50.487 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:50.487 "is_configured": false, 00:33:50.487 "data_offset": 2048, 00:33:50.487 "data_size": 63488 00:33:50.487 }, 00:33:50.487 { 00:33:50.487 "name": null, 00:33:50.487 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:50.487 "is_configured": false, 00:33:50.487 "data_offset": 2048, 00:33:50.487 "data_size": 63488 00:33:50.487 } 00:33:50.487 ] 00:33:50.487 }' 00:33:50.487 23:19:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:50.487 23:19:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.054 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:51.054 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:51.054 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:51.311 [2024-07-13 23:19:40.632119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:51.311 [2024-07-13 23:19:40.632858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.311 [2024-07-13 23:19:40.633230] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:51.311 [2024-07-13 23:19:40.633524] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.312 [2024-07-13 
23:19:40.634275] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.312 [2024-07-13 23:19:40.634585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:51.312 [2024-07-13 23:19:40.634911] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:51.312 [2024-07-13 23:19:40.635084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:51.312 pt3 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.312 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.576 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:51.576 "name": "raid_bdev1", 00:33:51.576 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:51.576 "strip_size_kb": 64, 00:33:51.576 "state": "configuring", 00:33:51.576 "raid_level": "raid5f", 00:33:51.576 "superblock": true, 00:33:51.576 "num_base_bdevs": 4, 00:33:51.576 "num_base_bdevs_discovered": 2, 00:33:51.576 "num_base_bdevs_operational": 3, 00:33:51.576 "base_bdevs_list": [ 00:33:51.576 { 00:33:51.576 "name": null, 00:33:51.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.576 "is_configured": false, 00:33:51.576 "data_offset": 2048, 00:33:51.576 "data_size": 63488 00:33:51.576 }, 00:33:51.576 { 00:33:51.576 "name": "pt2", 00:33:51.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:51.576 "is_configured": true, 00:33:51.576 "data_offset": 2048, 00:33:51.576 "data_size": 63488 00:33:51.576 }, 00:33:51.576 { 00:33:51.576 "name": "pt3", 00:33:51.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:51.576 "is_configured": true, 00:33:51.576 "data_offset": 2048, 00:33:51.576 "data_size": 63488 00:33:51.576 }, 00:33:51.576 { 00:33:51.576 "name": null, 00:33:51.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:51.576 "is_configured": false, 00:33:51.576 "data_offset": 2048, 00:33:51.576 "data_size": 63488 00:33:51.577 } 00:33:51.577 ] 00:33:51.577 }' 00:33:51.577 23:19:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:51.577 23:19:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.152 
23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:52.152 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:52.152 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:33:52.152 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:52.409 [2024-07-13 23:19:41.660317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:52.409 [2024-07-13 23:19:41.661083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:52.410 [2024-07-13 23:19:41.661369] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:52.410 [2024-07-13 23:19:41.661642] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:52.410 [2024-07-13 23:19:41.662397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:52.410 [2024-07-13 23:19:41.662686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:52.410 [2024-07-13 23:19:41.663024] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:52.410 [2024-07-13 23:19:41.663199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:52.410 [2024-07-13 23:19:41.663475] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:33:52.410 [2024-07-13 23:19:41.663600] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:52.410 [2024-07-13 23:19:41.663825] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:33:52.410 [2024-07-13 23:19:41.664836] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:33:52.410 [2024-07-13 23:19:41.665038] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:33:52.410 [2024-07-13 23:19:41.665471] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:52.410 pt4 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.410 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.668 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:52.668 "name": "raid_bdev1", 00:33:52.668 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:52.668 "strip_size_kb": 64, 00:33:52.668 "state": "online", 00:33:52.668 "raid_level": "raid5f", 00:33:52.668 "superblock": true, 00:33:52.668 "num_base_bdevs": 4, 00:33:52.668 "num_base_bdevs_discovered": 3, 00:33:52.668 "num_base_bdevs_operational": 3, 00:33:52.668 "base_bdevs_list": [ 00:33:52.668 { 00:33:52.668 "name": null, 00:33:52.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.668 "is_configured": false, 00:33:52.668 "data_offset": 2048, 00:33:52.668 "data_size": 63488 00:33:52.668 }, 00:33:52.668 { 00:33:52.668 "name": "pt2", 00:33:52.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:52.668 "is_configured": true, 00:33:52.668 "data_offset": 2048, 00:33:52.668 "data_size": 63488 00:33:52.668 }, 00:33:52.668 { 00:33:52.668 "name": "pt3", 00:33:52.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:52.668 "is_configured": true, 00:33:52.668 "data_offset": 2048, 00:33:52.668 "data_size": 63488 00:33:52.668 }, 00:33:52.668 { 00:33:52.668 "name": "pt4", 00:33:52.668 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:52.668 "is_configured": true, 00:33:52.668 "data_offset": 2048, 00:33:52.668 "data_size": 63488 00:33:52.668 } 00:33:52.668 ] 00:33:52.668 }' 00:33:52.668 23:19:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:52.668 23:19:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.234 23:19:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:53.493 [2024-07-13 23:19:42.777668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:53.493 [2024-07-13 23:19:42.777925] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:53.493 [2024-07-13 23:19:42.778101] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:53.493 [2024-07-13 23:19:42.778286] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:53.493 [2024-07-13 23:19:42.778390] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:33:53.493 23:19:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.493 23:19:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:53.751 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:53.751 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:53.751 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:33:53.751 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:33:53.751 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:54.009 23:19:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:54.266 [2024-07-13 23:19:43.478044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:54.266 [2024-07-13 23:19:43.478593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:54.266 [2024-07-13 23:19:43.478928] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:54.266 [2024-07-13 23:19:43.479188] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:54.266 [2024-07-13 23:19:43.481909] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:54.266 [2024-07-13 23:19:43.482237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:54.266 [2024-07-13 23:19:43.482555] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:54.266 [2024-07-13 23:19:43.482738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:54.266 [2024-07-13 23:19:43.483083] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:54.266 [2024-07-13 23:19:43.483252] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:54.266 [2024-07-13 23:19:43.483325] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:33:54.266 [2024-07-13 23:19:43.483540] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:54.266 pt1 00:33:54.266 [2024-07-13 23:19:43.483894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.266 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.524 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:54.524 "name": "raid_bdev1", 00:33:54.524 
"uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:54.524 "strip_size_kb": 64, 00:33:54.524 "state": "configuring", 00:33:54.524 "raid_level": "raid5f", 00:33:54.524 "superblock": true, 00:33:54.524 "num_base_bdevs": 4, 00:33:54.524 "num_base_bdevs_discovered": 2, 00:33:54.524 "num_base_bdevs_operational": 3, 00:33:54.524 "base_bdevs_list": [ 00:33:54.524 { 00:33:54.524 "name": null, 00:33:54.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.524 "is_configured": false, 00:33:54.524 "data_offset": 2048, 00:33:54.524 "data_size": 63488 00:33:54.524 }, 00:33:54.524 { 00:33:54.524 "name": "pt2", 00:33:54.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:54.524 "is_configured": true, 00:33:54.524 "data_offset": 2048, 00:33:54.524 "data_size": 63488 00:33:54.524 }, 00:33:54.524 { 00:33:54.524 "name": "pt3", 00:33:54.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:54.524 "is_configured": true, 00:33:54.524 "data_offset": 2048, 00:33:54.524 "data_size": 63488 00:33:54.524 }, 00:33:54.524 { 00:33:54.524 "name": null, 00:33:54.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:54.524 "is_configured": false, 00:33:54.524 "data_offset": 2048, 00:33:54.524 "data_size": 63488 00:33:54.524 } 00:33:54.524 ] 00:33:54.524 }' 00:33:54.524 23:19:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:54.524 23:19:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.091 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:33:55.091 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:55.349 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:33:55.349 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:55.608 [2024-07-13 23:19:44.858972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:55.608 [2024-07-13 23:19:44.859805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:55.608 [2024-07-13 23:19:44.860115] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:55.608 [2024-07-13 23:19:44.860369] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:55.608 [2024-07-13 23:19:44.861150] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:55.608 [2024-07-13 23:19:44.861480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:55.608 [2024-07-13 23:19:44.861814] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:55.608 [2024-07-13 23:19:44.861985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:55.608 [2024-07-13 23:19:44.862229] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:33:55.608 [2024-07-13 23:19:44.862348] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:55.608 [2024-07-13 23:19:44.862473] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:33:55.608 [2024-07-13 23:19:44.863310] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:33:55.608 [2024-07-13 23:19:44.863450] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:33:55.608 [2024-07-13 23:19:44.863782] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:55.608 pt4 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.608 23:19:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.866 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:55.866 "name": "raid_bdev1", 00:33:55.866 "uuid": "bcbdee1b-9aab-4a59-813d-97beca9de828", 00:33:55.866 "strip_size_kb": 64, 00:33:55.866 "state": "online", 00:33:55.866 "raid_level": "raid5f", 00:33:55.866 "superblock": true, 00:33:55.866 "num_base_bdevs": 4, 00:33:55.866 "num_base_bdevs_discovered": 3, 00:33:55.866 "num_base_bdevs_operational": 3, 00:33:55.866 "base_bdevs_list": [ 00:33:55.866 { 00:33:55.866 "name": null, 00:33:55.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.866 "is_configured": false, 00:33:55.866 "data_offset": 2048, 00:33:55.866 "data_size": 63488 00:33:55.866 }, 00:33:55.866 { 00:33:55.866 "name": "pt2", 00:33:55.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:55.866 "is_configured": true, 00:33:55.866 "data_offset": 2048, 00:33:55.866 "data_size": 63488 00:33:55.866 }, 00:33:55.866 { 00:33:55.866 "name": "pt3", 00:33:55.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:55.866 "is_configured": true, 00:33:55.866 "data_offset": 2048, 00:33:55.866 "data_size": 63488 00:33:55.866 }, 00:33:55.866 { 00:33:55.866 "name": "pt4", 00:33:55.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:55.866 "is_configured": true, 00:33:55.866 "data_offset": 2048, 00:33:55.866 "data_size": 63488 00:33:55.866 } 00:33:55.866 ] 00:33:55.866 }' 00:33:55.866 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:55.866 23:19:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.434 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:56.434 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:56.693 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:56.693 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:56.693 23:19:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:33:56.952 [2024-07-13 23:19:46.112220] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' bcbdee1b-9aab-4a59-813d-97beca9de828 '!=' bcbdee1b-9aab-4a59-813d-97beca9de828 ']' 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 165934 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 165934 ']' 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 165934 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 165934 00:33:56.952 killing process with pid 165934 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 165934' 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 165934 00:33:56.952 [2024-07-13 23:19:46.147260] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:56.952 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 165934 00:33:56.952 [2024-07-13 23:19:46.147335] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:56.952 [2024-07-13 23:19:46.147410] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:56.952 [2024-07-13 23:19:46.147421] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:33:56.952 [2024-07-13 23:19:46.187175] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:57.211 23:19:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:33:57.211 ************************************ 00:33:57.211 END TEST raid5f_superblock_test 00:33:57.211 ************************************ 00:33:57.211 00:33:57.211 real 0m26.470s 00:33:57.211 user 0m49.999s 00:33:57.211 sys 0m3.269s 00:33:57.211 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:57.211 23:19:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.211 23:19:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:57.211 23:19:46 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:33:57.211 23:19:46 bdev_raid -- bdev/bdev_raid.sh@890 -- # 
run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:33:57.211 23:19:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:57.211 23:19:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:57.211 23:19:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:57.211 ************************************ 00:33:57.211 START TEST raid5f_rebuild_test 00:33:57.211 ************************************ 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 false false true 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=166780 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 166780 /var/tmp/spdk-raid.sock 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 166780 ']' 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:57.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:57.211 23:19:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.211 [2024-07-13 23:19:46.542666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:57.211 [2024-07-13 23:19:46.543205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166780 ] 00:33:57.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:57.211 Zero copy mechanism will not be used. 
00:33:57.470 [2024-07-13 23:19:46.694197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.470 [2024-07-13 23:19:46.766406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.470 [2024-07-13 23:19:46.824309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:58.406 23:19:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:58.406 23:19:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:33:58.406 23:19:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:58.406 23:19:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:58.406 BaseBdev1_malloc 00:33:58.406 23:19:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:58.665 [2024-07-13 23:19:47.874242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:58.665 [2024-07-13 23:19:47.874554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:58.665 [2024-07-13 23:19:47.874720] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:33:58.665 [2024-07-13 23:19:47.874877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:58.665 [2024-07-13 23:19:47.877585] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:58.665 [2024-07-13 23:19:47.877747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:58.665 BaseBdev1 00:33:58.665 23:19:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:58.665 23:19:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:58.923 BaseBdev2_malloc 00:33:58.923 23:19:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:58.923 [2024-07-13 23:19:48.325416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:58.923 [2024-07-13 23:19:48.325722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:58.923 [2024-07-13 23:19:48.325883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:33:58.923 [2024-07-13 23:19:48.326037] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:58.923 [2024-07-13 23:19:48.328868] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:58.923 [2024-07-13 23:19:48.329069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:59.182 BaseBdev2 00:33:59.182 23:19:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:59.182 23:19:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:59.182 BaseBdev3_malloc 00:33:59.452 23:19:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:59.452 [2024-07-13 23:19:48.800942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:59.452 [2024-07-13 23:19:48.801225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:59.452 [2024-07-13 23:19:48.801436] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:59.452 [2024-07-13 23:19:48.801589] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:59.452 [2024-07-13 23:19:48.804180] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:59.452 [2024-07-13 23:19:48.804370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:59.452 BaseBdev3 00:33:59.452 23:19:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:59.452 23:19:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:59.739 BaseBdev4_malloc 00:33:59.739 23:19:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:59.998 [2024-07-13 23:19:49.247681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:59.998 [2024-07-13 23:19:49.248009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:59.998 [2024-07-13 23:19:49.248159] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:59.998 [2024-07-13 23:19:49.248313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:59.998 [2024-07-13 23:19:49.250842] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:59.998 [2024-07-13 23:19:49.251041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:59.998 BaseBdev4 00:33:59.998 23:19:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:34:00.256 spare_malloc 00:34:00.256 23:19:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:00.515 spare_delay 00:34:00.515 23:19:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:00.774 [2024-07-13 23:19:49.994586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:00.774 [2024-07-13 23:19:49.994905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:00.774 [2024-07-13 23:19:49.995062] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:00.774 [2024-07-13 23:19:49.995260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:00.774 [2024-07-13 23:19:49.998041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:00.774 [2024-07-13 23:19:49.998238] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:00.774 spare 00:34:00.774 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:34:01.032 [2024-07-13 23:19:50.214812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:01.032 [2024-07-13 23:19:50.217141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:01.032 [2024-07-13 23:19:50.217378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:01.032 [2024-07-13 23:19:50.217610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:01.032 [2024-07-13 23:19:50.217852] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:34:01.032 [2024-07-13 23:19:50.217965] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:01.032 [2024-07-13 23:19:50.218184] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:34:01.032 [2024-07-13 23:19:50.219070] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:34:01.032 [2024-07-13 23:19:50.219246] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:34:01.032 [2024-07-13 23:19:50.219625] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.032 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.291 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:01.291 "name": "raid_bdev1", 00:34:01.291 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:01.291 "strip_size_kb": 64, 00:34:01.291 "state": "online", 00:34:01.291 "raid_level": "raid5f", 00:34:01.291 "superblock": false, 00:34:01.291 "num_base_bdevs": 4, 00:34:01.291 "num_base_bdevs_discovered": 4, 00:34:01.291 "num_base_bdevs_operational": 4, 00:34:01.291 "base_bdevs_list": [ 00:34:01.291 { 00:34:01.291 "name": 
"BaseBdev1", 00:34:01.291 "uuid": "8b47da42-969e-5327-9e5b-ab0334c35780", 00:34:01.291 "is_configured": true, 00:34:01.291 "data_offset": 0, 00:34:01.291 "data_size": 65536 00:34:01.291 }, 00:34:01.291 { 00:34:01.291 "name": "BaseBdev2", 00:34:01.291 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:01.291 "is_configured": true, 00:34:01.291 "data_offset": 0, 00:34:01.291 "data_size": 65536 00:34:01.291 }, 00:34:01.291 { 00:34:01.291 "name": "BaseBdev3", 00:34:01.291 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:01.291 "is_configured": true, 00:34:01.291 "data_offset": 0, 00:34:01.291 "data_size": 65536 00:34:01.291 }, 00:34:01.291 { 00:34:01.291 "name": "BaseBdev4", 00:34:01.291 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:01.291 "is_configured": true, 00:34:01.291 "data_offset": 0, 00:34:01.291 "data_size": 65536 00:34:01.291 } 00:34:01.291 ] 00:34:01.291 }' 00:34:01.291 23:19:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:01.291 23:19:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.857 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:01.857 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:34:02.115 [2024-07-13 23:19:51.368075] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:02.115 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:34:02.115 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.115 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:02.374 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:02.632 [2024-07-13 23:19:51.848037] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000027a0 00:34:02.632 /dev/nbd0 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:02.632 1+0 records in 00:34:02.632 1+0 records out 00:34:02.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554201 s, 7.4 MB/s 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:34:02.632 23:19:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:34:03.199 512+0 records in 00:34:03.199 512+0 records out 00:34:03.199 100663296 bytes (101 MB, 96 MiB) copied, 0.53799 s, 187 MB/s 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:03.199 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:03.457 [2024-07-13 23:19:52.748637] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:03.457 23:19:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:03.715 [2024-07-13 23:19:53.024229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.715 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.978 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.978 "name": "raid_bdev1", 00:34:03.978 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:03.978 "strip_size_kb": 64, 00:34:03.978 "state": "online", 00:34:03.978 "raid_level": "raid5f", 00:34:03.978 "superblock": false, 00:34:03.978 "num_base_bdevs": 4, 00:34:03.978 "num_base_bdevs_discovered": 3, 00:34:03.978 "num_base_bdevs_operational": 3, 00:34:03.978 "base_bdevs_list": [ 00:34:03.978 { 00:34:03.978 "name": null, 00:34:03.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.978 "is_configured": false, 00:34:03.978 "data_offset": 0, 00:34:03.978 "data_size": 65536 00:34:03.978 }, 00:34:03.978 { 00:34:03.978 "name": "BaseBdev2", 00:34:03.978 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:03.978 "is_configured": true, 00:34:03.978 "data_offset": 0, 00:34:03.978 
"data_size": 65536 00:34:03.978 }, 00:34:03.978 { 00:34:03.978 "name": "BaseBdev3", 00:34:03.978 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:03.978 "is_configured": true, 00:34:03.978 "data_offset": 0, 00:34:03.978 "data_size": 65536 00:34:03.978 }, 00:34:03.978 { 00:34:03.978 "name": "BaseBdev4", 00:34:03.978 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:03.978 "is_configured": true, 00:34:03.978 "data_offset": 0, 00:34:03.979 "data_size": 65536 00:34:03.979 } 00:34:03.979 ] 00:34:03.979 }' 00:34:03.979 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.979 23:19:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.545 23:19:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:04.802 [2024-07-13 23:19:54.196682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:04.802 [2024-07-13 23:19:54.203389] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:34:04.802 [2024-07-13 23:19:54.207232] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:05.059 23:19:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.991 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.249 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:06.249 "name": "raid_bdev1", 00:34:06.249 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:06.249 "strip_size_kb": 64, 00:34:06.249 "state": "online", 00:34:06.249 "raid_level": "raid5f", 00:34:06.249 "superblock": false, 00:34:06.249 "num_base_bdevs": 4, 00:34:06.249 "num_base_bdevs_discovered": 4, 00:34:06.249 "num_base_bdevs_operational": 4, 00:34:06.249 "process": { 00:34:06.249 "type": "rebuild", 00:34:06.249 "target": "spare", 00:34:06.249 "progress": { 00:34:06.249 "blocks": 21120, 00:34:06.249 "percent": 10 00:34:06.249 } 00:34:06.249 }, 00:34:06.249 "base_bdevs_list": [ 00:34:06.249 { 00:34:06.249 "name": "spare", 00:34:06.249 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:06.249 "is_configured": true, 00:34:06.249 "data_offset": 0, 00:34:06.249 "data_size": 65536 00:34:06.249 }, 00:34:06.249 { 00:34:06.249 "name": "BaseBdev2", 00:34:06.249 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:06.249 "is_configured": true, 00:34:06.249 "data_offset": 0, 00:34:06.249 "data_size": 65536 00:34:06.249 }, 00:34:06.249 { 00:34:06.249 "name": "BaseBdev3", 00:34:06.249 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:06.249 "is_configured": true, 00:34:06.249 
"data_offset": 0, 00:34:06.249 "data_size": 65536 00:34:06.249 }, 00:34:06.249 { 00:34:06.249 "name": "BaseBdev4", 00:34:06.249 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:06.249 "is_configured": true, 00:34:06.249 "data_offset": 0, 00:34:06.249 "data_size": 65536 00:34:06.249 } 00:34:06.249 ] 00:34:06.249 }' 00:34:06.249 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:06.249 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:06.249 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:06.249 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:06.249 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:06.507 [2024-07-13 23:19:55.806863] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:06.507 [2024-07-13 23:19:55.827159] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:06.507 [2024-07-13 23:19:55.828142] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:06.507 [2024-07-13 23:19:55.828194] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:06.507 [2024-07-13 23:19:55.828210] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.507 23:19:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.765 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:06.765 "name": "raid_bdev1", 00:34:06.765 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:06.765 "strip_size_kb": 64, 00:34:06.765 "state": "online", 00:34:06.765 "raid_level": "raid5f", 00:34:06.765 "superblock": false, 00:34:06.765 "num_base_bdevs": 4, 00:34:06.765 "num_base_bdevs_discovered": 3, 00:34:06.765 "num_base_bdevs_operational": 3, 00:34:06.765 "base_bdevs_list": [ 00:34:06.765 { 00:34:06.765 "name": null, 00:34:06.765 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:06.765 "is_configured": false, 00:34:06.765 "data_offset": 0, 00:34:06.765 "data_size": 65536 00:34:06.765 }, 00:34:06.765 { 00:34:06.765 "name": "BaseBdev2", 00:34:06.765 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:06.765 "is_configured": true, 00:34:06.765 "data_offset": 0, 00:34:06.765 "data_size": 65536 00:34:06.765 }, 00:34:06.765 { 00:34:06.765 "name": "BaseBdev3", 00:34:06.765 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:06.765 "is_configured": true, 00:34:06.765 "data_offset": 0, 00:34:06.765 "data_size": 65536 00:34:06.765 }, 00:34:06.765 { 00:34:06.765 "name": "BaseBdev4", 00:34:06.765 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:06.765 "is_configured": true, 00:34:06.765 "data_offset": 0, 00:34:06.765 "data_size": 65536 00:34:06.765 } 00:34:06.765 ] 00:34:06.765 }' 00:34:06.765 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:06.765 23:19:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.332 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.645 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:07.645 "name": "raid_bdev1", 00:34:07.645 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:07.645 "strip_size_kb": 64, 00:34:07.645 "state": "online", 00:34:07.645 "raid_level": "raid5f", 00:34:07.645 "superblock": false, 00:34:07.645 "num_base_bdevs": 4, 00:34:07.645 "num_base_bdevs_discovered": 3, 00:34:07.645 "num_base_bdevs_operational": 3, 00:34:07.645 "base_bdevs_list": [ 00:34:07.645 { 00:34:07.645 "name": null, 00:34:07.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.645 "is_configured": false, 00:34:07.645 "data_offset": 0, 00:34:07.645 "data_size": 65536 00:34:07.645 }, 00:34:07.645 { 00:34:07.645 "name": "BaseBdev2", 00:34:07.646 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:07.646 "is_configured": true, 00:34:07.646 "data_offset": 0, 00:34:07.646 "data_size": 65536 00:34:07.646 }, 00:34:07.646 { 00:34:07.646 "name": "BaseBdev3", 00:34:07.646 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:07.646 "is_configured": true, 00:34:07.646 "data_offset": 0, 00:34:07.646 "data_size": 65536 00:34:07.646 }, 00:34:07.646 { 00:34:07.646 "name": "BaseBdev4", 00:34:07.646 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:07.646 "is_configured": true, 00:34:07.646 "data_offset": 0, 00:34:07.646 "data_size": 65536 00:34:07.646 } 00:34:07.646 ] 00:34:07.646 }' 00:34:07.646 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:07.646 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 
00:34:07.646 23:19:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:07.646 23:19:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:07.646 23:19:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:07.904 [2024-07-13 23:19:57.298259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:07.904 [2024-07-13 23:19:57.302897] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027f40 00:34:07.904 [2024-07-13 23:19:57.305543] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:08.162 23:19:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.097 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:09.355 "name": "raid_bdev1", 00:34:09.355 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:09.355 "strip_size_kb": 64, 00:34:09.355 "state": "online", 00:34:09.355 "raid_level": "raid5f", 00:34:09.355 "superblock": false, 00:34:09.355 "num_base_bdevs": 4, 00:34:09.355 "num_base_bdevs_discovered": 4, 00:34:09.355 "num_base_bdevs_operational": 4, 00:34:09.355 "process": { 00:34:09.355 "type": "rebuild", 00:34:09.355 "target": "spare", 00:34:09.355 "progress": { 00:34:09.355 "blocks": 23040, 00:34:09.355 "percent": 11 00:34:09.355 } 00:34:09.355 }, 00:34:09.355 "base_bdevs_list": [ 00:34:09.355 { 00:34:09.355 "name": "spare", 00:34:09.355 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:09.355 "is_configured": true, 00:34:09.355 "data_offset": 0, 00:34:09.355 "data_size": 65536 00:34:09.355 }, 00:34:09.355 { 00:34:09.355 "name": "BaseBdev2", 00:34:09.355 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:09.355 "is_configured": true, 00:34:09.355 "data_offset": 0, 00:34:09.355 "data_size": 65536 00:34:09.355 }, 00:34:09.355 { 00:34:09.355 "name": "BaseBdev3", 00:34:09.355 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:09.355 "is_configured": true, 00:34:09.355 "data_offset": 0, 00:34:09.355 "data_size": 65536 00:34:09.355 }, 00:34:09.355 { 00:34:09.355 "name": "BaseBdev4", 00:34:09.355 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:09.355 "is_configured": true, 00:34:09.355 "data_offset": 0, 00:34:09.355 "data_size": 65536 00:34:09.355 } 00:34:09.355 ] 00:34:09.355 }' 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1236 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.355 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.613 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:09.613 "name": "raid_bdev1", 00:34:09.613 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:09.613 "strip_size_kb": 64, 00:34:09.613 "state": "online", 00:34:09.613 "raid_level": "raid5f", 00:34:09.613 "superblock": false, 00:34:09.613 "num_base_bdevs": 4, 00:34:09.613 "num_base_bdevs_discovered": 4, 00:34:09.613 "num_base_bdevs_operational": 4, 00:34:09.613 "process": { 00:34:09.613 "type": "rebuild", 00:34:09.613 "target": "spare", 00:34:09.613 "progress": { 00:34:09.613 "blocks": 28800, 00:34:09.613 "percent": 14 00:34:09.613 } 00:34:09.613 }, 00:34:09.613 "base_bdevs_list": [ 00:34:09.613 { 00:34:09.613 "name": "spare", 00:34:09.613 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:09.613 "is_configured": true, 00:34:09.613 "data_offset": 0, 00:34:09.613 "data_size": 65536 00:34:09.613 }, 00:34:09.613 { 00:34:09.613 "name": "BaseBdev2", 00:34:09.613 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:09.613 "is_configured": true, 00:34:09.613 "data_offset": 0, 00:34:09.613 "data_size": 65536 00:34:09.613 }, 00:34:09.613 { 00:34:09.613 "name": "BaseBdev3", 00:34:09.613 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:09.613 "is_configured": true, 00:34:09.613 "data_offset": 0, 00:34:09.613 "data_size": 65536 00:34:09.613 }, 00:34:09.613 { 00:34:09.613 "name": "BaseBdev4", 00:34:09.613 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:09.613 "is_configured": true, 00:34:09.613 "data_offset": 0, 00:34:09.613 "data_size": 65536 00:34:09.613 } 00:34:09.613 ] 00:34:09.613 }' 00:34:09.613 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:09.613 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:09.613 23:19:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:09.613 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.613 23:19:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:10.987 23:19:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:10.987 "name": "raid_bdev1", 00:34:10.987 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:10.987 "strip_size_kb": 64, 00:34:10.987 "state": "online", 00:34:10.987 "raid_level": "raid5f", 00:34:10.987 "superblock": false, 00:34:10.987 "num_base_bdevs": 4, 00:34:10.987 "num_base_bdevs_discovered": 4, 00:34:10.987 "num_base_bdevs_operational": 4, 00:34:10.987 "process": { 00:34:10.987 "type": "rebuild", 00:34:10.987 "target": "spare", 00:34:10.987 "progress": { 00:34:10.987 "blocks": 53760, 00:34:10.987 "percent": 27 00:34:10.987 } 00:34:10.987 }, 00:34:10.987 "base_bdevs_list": [ 00:34:10.987 { 00:34:10.987 "name": "spare", 00:34:10.987 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:10.987 "is_configured": true, 00:34:10.987 "data_offset": 0, 00:34:10.987 "data_size": 65536 00:34:10.987 }, 00:34:10.987 { 00:34:10.987 "name": "BaseBdev2", 00:34:10.987 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:10.987 "is_configured": true, 00:34:10.987 "data_offset": 0, 00:34:10.987 "data_size": 65536 00:34:10.987 }, 00:34:10.987 { 00:34:10.987 "name": "BaseBdev3", 00:34:10.987 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:10.987 "is_configured": true, 00:34:10.987 "data_offset": 0, 00:34:10.987 "data_size": 65536 00:34:10.987 }, 00:34:10.987 { 00:34:10.987 "name": "BaseBdev4", 00:34:10.987 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:10.987 "is_configured": true, 00:34:10.987 "data_offset": 0, 00:34:10.987 "data_size": 65536 00:34:10.987 } 00:34:10.987 ] 00:34:10.987 }' 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:10.987 23:20:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:12.359 23:20:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:12.359 "name": "raid_bdev1", 00:34:12.359 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:12.359 "strip_size_kb": 64, 00:34:12.359 "state": "online", 00:34:12.359 "raid_level": "raid5f", 00:34:12.359 "superblock": false, 00:34:12.359 "num_base_bdevs": 4, 00:34:12.359 "num_base_bdevs_discovered": 4, 00:34:12.359 "num_base_bdevs_operational": 4, 00:34:12.359 "process": { 00:34:12.359 "type": "rebuild", 00:34:12.359 "target": "spare", 00:34:12.359 "progress": { 00:34:12.359 "blocks": 80640, 00:34:12.359 "percent": 41 00:34:12.359 } 00:34:12.359 }, 00:34:12.359 "base_bdevs_list": [ 00:34:12.359 { 00:34:12.359 "name": "spare", 00:34:12.359 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:12.359 "is_configured": true, 00:34:12.359 "data_offset": 0, 00:34:12.359 "data_size": 65536 00:34:12.359 }, 00:34:12.359 { 00:34:12.359 "name": "BaseBdev2", 00:34:12.359 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:12.359 "is_configured": true, 00:34:12.359 "data_offset": 0, 00:34:12.359 "data_size": 65536 00:34:12.359 }, 00:34:12.359 { 00:34:12.359 "name": "BaseBdev3", 00:34:12.359 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:12.359 "is_configured": true, 00:34:12.359 "data_offset": 0, 00:34:12.359 "data_size": 65536 00:34:12.359 }, 00:34:12.359 { 00:34:12.359 "name": "BaseBdev4", 00:34:12.359 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:12.359 "is_configured": true, 00:34:12.359 "data_offset": 0, 00:34:12.359 "data_size": 65536 00:34:12.359 } 00:34:12.359 ] 00:34:12.359 }' 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:12.359 23:20:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.293 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.551 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:13.551 "name": "raid_bdev1", 00:34:13.551 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:13.551 "strip_size_kb": 64, 00:34:13.551 "state": "online", 00:34:13.551 "raid_level": "raid5f", 00:34:13.551 "superblock": false, 00:34:13.551 "num_base_bdevs": 4, 00:34:13.551 "num_base_bdevs_discovered": 4, 00:34:13.551 "num_base_bdevs_operational": 4, 00:34:13.551 "process": { 00:34:13.551 "type": "rebuild", 00:34:13.551 "target": "spare", 00:34:13.551 "progress": { 00:34:13.551 "blocks": 105600, 00:34:13.551 "percent": 53 00:34:13.551 } 00:34:13.551 }, 00:34:13.551 "base_bdevs_list": [ 00:34:13.551 { 00:34:13.551 "name": "spare", 00:34:13.551 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:13.551 "is_configured": true, 00:34:13.551 "data_offset": 0, 00:34:13.551 "data_size": 65536 00:34:13.551 }, 00:34:13.551 { 00:34:13.551 "name": "BaseBdev2", 00:34:13.551 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:13.551 "is_configured": true, 00:34:13.551 "data_offset": 0, 00:34:13.551 "data_size": 65536 00:34:13.551 }, 00:34:13.551 { 00:34:13.551 "name": "BaseBdev3", 00:34:13.551 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:13.551 "is_configured": true, 00:34:13.551 "data_offset": 0, 00:34:13.551 "data_size": 65536 00:34:13.551 }, 00:34:13.551 { 00:34:13.551 "name": "BaseBdev4", 00:34:13.551 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:13.551 "is_configured": true, 00:34:13.551 "data_offset": 0, 00:34:13.551 "data_size": 65536 00:34:13.551 } 00:34:13.551 ] 00:34:13.551 }' 00:34:13.551 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:13.808 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:13.808 23:20:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:13.808 23:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:13.808 23:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.741 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:34:15.000 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:15.000 "name": "raid_bdev1", 00:34:15.000 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:15.000 "strip_size_kb": 64, 00:34:15.000 "state": "online", 00:34:15.000 "raid_level": "raid5f", 00:34:15.000 "superblock": false, 00:34:15.000 "num_base_bdevs": 4, 00:34:15.000 "num_base_bdevs_discovered": 4, 00:34:15.000 "num_base_bdevs_operational": 4, 00:34:15.000 "process": { 00:34:15.000 "type": "rebuild", 00:34:15.000 "target": "spare", 00:34:15.000 "progress": { 00:34:15.000 "blocks": 132480, 00:34:15.000 "percent": 67 00:34:15.000 } 00:34:15.000 }, 00:34:15.000 "base_bdevs_list": [ 00:34:15.000 { 00:34:15.000 "name": "spare", 00:34:15.000 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:15.000 "is_configured": true, 00:34:15.000 "data_offset": 0, 00:34:15.000 "data_size": 65536 00:34:15.000 }, 00:34:15.000 { 00:34:15.000 "name": "BaseBdev2", 00:34:15.000 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:15.000 "is_configured": true, 00:34:15.000 "data_offset": 0, 00:34:15.000 "data_size": 65536 00:34:15.000 }, 00:34:15.000 { 00:34:15.000 "name": "BaseBdev3", 00:34:15.000 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:15.000 "is_configured": true, 00:34:15.000 "data_offset": 0, 00:34:15.000 "data_size": 65536 00:34:15.000 }, 00:34:15.000 { 00:34:15.000 "name": "BaseBdev4", 00:34:15.000 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:15.000 "is_configured": true, 00:34:15.000 "data_offset": 0, 00:34:15.000 "data_size": 65536 00:34:15.000 } 00:34:15.000 ] 00:34:15.000 }' 00:34:15.000 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:15.000 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:15.000 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:15.000 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:15.000 23:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:16.380 "name": "raid_bdev1", 00:34:16.380 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:16.380 "strip_size_kb": 64, 00:34:16.380 "state": "online", 00:34:16.380 "raid_level": "raid5f", 00:34:16.380 "superblock": false, 00:34:16.380 "num_base_bdevs": 4, 00:34:16.380 "num_base_bdevs_discovered": 4, 
00:34:16.380 "num_base_bdevs_operational": 4, 00:34:16.380 "process": { 00:34:16.380 "type": "rebuild", 00:34:16.380 "target": "spare", 00:34:16.380 "progress": { 00:34:16.380 "blocks": 157440, 00:34:16.380 "percent": 80 00:34:16.380 } 00:34:16.380 }, 00:34:16.380 "base_bdevs_list": [ 00:34:16.380 { 00:34:16.380 "name": "spare", 00:34:16.380 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:16.380 "is_configured": true, 00:34:16.380 "data_offset": 0, 00:34:16.380 "data_size": 65536 00:34:16.380 }, 00:34:16.380 { 00:34:16.380 "name": "BaseBdev2", 00:34:16.380 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:16.380 "is_configured": true, 00:34:16.380 "data_offset": 0, 00:34:16.380 "data_size": 65536 00:34:16.380 }, 00:34:16.380 { 00:34:16.380 "name": "BaseBdev3", 00:34:16.380 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:16.380 "is_configured": true, 00:34:16.380 "data_offset": 0, 00:34:16.380 "data_size": 65536 00:34:16.380 }, 00:34:16.380 { 00:34:16.380 "name": "BaseBdev4", 00:34:16.380 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:16.380 "is_configured": true, 00:34:16.380 "data_offset": 0, 00:34:16.380 "data_size": 65536 00:34:16.380 } 00:34:16.380 ] 00:34:16.380 }' 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:16.380 23:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.754 23:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.754 23:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:17.754 "name": "raid_bdev1", 00:34:17.754 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:17.754 "strip_size_kb": 64, 00:34:17.754 "state": "online", 00:34:17.754 "raid_level": "raid5f", 00:34:17.754 "superblock": false, 00:34:17.754 "num_base_bdevs": 4, 00:34:17.754 "num_base_bdevs_discovered": 4, 00:34:17.754 "num_base_bdevs_operational": 4, 00:34:17.754 "process": { 00:34:17.754 "type": "rebuild", 00:34:17.754 "target": "spare", 00:34:17.754 "progress": { 00:34:17.754 "blocks": 184320, 00:34:17.754 "percent": 93 00:34:17.754 } 00:34:17.754 }, 00:34:17.754 "base_bdevs_list": [ 00:34:17.754 { 00:34:17.754 "name": "spare", 00:34:17.754 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:17.754 
"is_configured": true, 00:34:17.754 "data_offset": 0, 00:34:17.754 "data_size": 65536 00:34:17.754 }, 00:34:17.754 { 00:34:17.754 "name": "BaseBdev2", 00:34:17.754 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:17.754 "is_configured": true, 00:34:17.754 "data_offset": 0, 00:34:17.754 "data_size": 65536 00:34:17.754 }, 00:34:17.754 { 00:34:17.754 "name": "BaseBdev3", 00:34:17.754 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:17.754 "is_configured": true, 00:34:17.754 "data_offset": 0, 00:34:17.754 "data_size": 65536 00:34:17.754 }, 00:34:17.754 { 00:34:17.754 "name": "BaseBdev4", 00:34:17.754 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:17.754 "is_configured": true, 00:34:17.754 "data_offset": 0, 00:34:17.754 "data_size": 65536 00:34:17.754 } 00:34:17.754 ] 00:34:17.754 }' 00:34:17.754 23:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:17.754 23:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:17.754 23:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:17.754 23:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:17.754 23:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:18.320 [2024-07-13 23:20:07.688194] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:18.320 [2024-07-13 23:20:07.688282] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:18.320 [2024-07-13 23:20:07.688393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.887 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:19.146 "name": "raid_bdev1", 00:34:19.146 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:19.146 "strip_size_kb": 64, 00:34:19.146 "state": "online", 00:34:19.146 "raid_level": "raid5f", 00:34:19.146 "superblock": false, 00:34:19.146 "num_base_bdevs": 4, 00:34:19.146 "num_base_bdevs_discovered": 4, 00:34:19.146 "num_base_bdevs_operational": 4, 00:34:19.146 "base_bdevs_list": [ 00:34:19.146 { 00:34:19.146 "name": "spare", 00:34:19.146 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:19.146 "is_configured": true, 00:34:19.146 "data_offset": 0, 00:34:19.146 "data_size": 65536 00:34:19.146 }, 00:34:19.146 { 00:34:19.146 "name": "BaseBdev2", 00:34:19.146 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:19.146 "is_configured": 
true, 00:34:19.146 "data_offset": 0, 00:34:19.146 "data_size": 65536 00:34:19.146 }, 00:34:19.146 { 00:34:19.146 "name": "BaseBdev3", 00:34:19.146 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:19.146 "is_configured": true, 00:34:19.146 "data_offset": 0, 00:34:19.146 "data_size": 65536 00:34:19.146 }, 00:34:19.146 { 00:34:19.146 "name": "BaseBdev4", 00:34:19.146 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:19.146 "is_configured": true, 00:34:19.146 "data_offset": 0, 00:34:19.146 "data_size": 65536 00:34:19.146 } 00:34:19.146 ] 00:34:19.146 }' 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.146 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.404 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:19.404 "name": "raid_bdev1", 00:34:19.404 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:19.404 "strip_size_kb": 64, 00:34:19.404 "state": "online", 00:34:19.404 "raid_level": "raid5f", 00:34:19.404 "superblock": false, 00:34:19.404 "num_base_bdevs": 4, 00:34:19.404 "num_base_bdevs_discovered": 4, 00:34:19.404 "num_base_bdevs_operational": 4, 00:34:19.404 "base_bdevs_list": [ 00:34:19.404 { 00:34:19.404 "name": "spare", 00:34:19.404 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:19.404 "is_configured": true, 00:34:19.404 "data_offset": 0, 00:34:19.404 "data_size": 65536 00:34:19.404 }, 00:34:19.404 { 00:34:19.404 "name": "BaseBdev2", 00:34:19.404 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:19.404 "is_configured": true, 00:34:19.404 "data_offset": 0, 00:34:19.404 "data_size": 65536 00:34:19.404 }, 00:34:19.404 { 00:34:19.404 "name": "BaseBdev3", 00:34:19.404 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:19.404 "is_configured": true, 00:34:19.404 "data_offset": 0, 00:34:19.404 "data_size": 65536 00:34:19.404 }, 00:34:19.404 { 00:34:19.404 "name": "BaseBdev4", 00:34:19.404 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:19.404 "is_configured": true, 00:34:19.404 "data_offset": 0, 00:34:19.404 "data_size": 65536 00:34:19.404 } 00:34:19.404 ] 00:34:19.404 }' 00:34:19.404 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.661 23:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.919 23:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.919 "name": "raid_bdev1", 00:34:19.919 "uuid": "282b07bd-d472-4ed0-82c7-3e5aa13fcbca", 00:34:19.919 "strip_size_kb": 64, 00:34:19.919 "state": "online", 00:34:19.919 "raid_level": "raid5f", 00:34:19.919 "superblock": false, 00:34:19.919 "num_base_bdevs": 4, 00:34:19.919 "num_base_bdevs_discovered": 4, 00:34:19.919 "num_base_bdevs_operational": 4, 00:34:19.919 "base_bdevs_list": [ 00:34:19.919 { 00:34:19.919 "name": "spare", 00:34:19.919 "uuid": "7aad808e-715a-5408-937f-a42c3aa02c05", 00:34:19.919 "is_configured": true, 00:34:19.919 "data_offset": 0, 00:34:19.919 "data_size": 65536 00:34:19.919 }, 00:34:19.919 { 00:34:19.919 "name": "BaseBdev2", 00:34:19.919 "uuid": "713d9f40-40e3-55d8-aed7-a7aea7600cf7", 00:34:19.919 "is_configured": true, 00:34:19.919 "data_offset": 0, 00:34:19.919 "data_size": 65536 00:34:19.919 }, 00:34:19.919 { 00:34:19.919 "name": "BaseBdev3", 00:34:19.919 "uuid": "2a6b9c81-eec1-5ac9-bd3c-f0089ee4e9ee", 00:34:19.919 "is_configured": true, 00:34:19.919 "data_offset": 0, 00:34:19.919 "data_size": 65536 00:34:19.919 }, 00:34:19.919 { 00:34:19.919 "name": "BaseBdev4", 00:34:19.919 "uuid": "76c2aa6e-66f7-5484-9a89-d8f93b22605e", 00:34:19.919 "is_configured": true, 00:34:19.919 "data_offset": 0, 00:34:19.919 "data_size": 65536 00:34:19.919 } 00:34:19.919 ] 00:34:19.919 }' 00:34:19.919 23:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.919 23:20:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.486 23:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:20.744 [2024-07-13 23:20:10.022301] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:34:20.744 [2024-07-13 23:20:10.022365] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:20.744 [2024-07-13 23:20:10.022469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:20.744 [2024-07-13 23:20:10.022587] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.744 [2024-07-13 23:20:10.022604] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:34:20.744 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.744 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:21.003 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:21.261 /dev/nbd0 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:34:21.261 1+0 records in 00:34:21.261 1+0 records out 00:34:21.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527467 s, 7.8 MB/s 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:21.261 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:21.262 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:21.520 /dev/nbd1 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:21.520 1+0 records in 00:34:21.520 1+0 records out 00:34:21.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455403 s, 9.0 MB/s 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:21.520 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:21.779 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:21.779 
23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:21.779 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:21.779 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:21.779 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:21.779 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:21.779 23:20:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:22.038 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 166780 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 166780 ']' 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 166780 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166780 00:34:22.297 killing process with pid 166780 00:34:22.297 Received shutdown signal, test time was about 60.000000 seconds 00:34:22.297 00:34:22.297 Latency(us) 00:34:22.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:34:22.297 =================================================================================================================== 00:34:22.297 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166780' 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 166780 00:34:22.297 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 166780 00:34:22.297 [2024-07-13 23:20:11.504020] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:22.297 [2024-07-13 23:20:11.549542] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:22.556 ************************************ 00:34:22.556 END TEST raid5f_rebuild_test 00:34:22.556 ************************************ 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:34:22.556 00:34:22.556 real 0m25.321s 00:34:22.556 user 0m38.150s 00:34:22.556 sys 0m2.920s 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.556 23:20:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:22.556 23:20:11 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:34:22.556 23:20:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:34:22.556 23:20:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:22.556 23:20:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:22.556 ************************************ 00:34:22.556 START TEST raid5f_rebuild_test_sb 00:34:22.556 ************************************ 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 true false true 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:22.556 23:20:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=167395 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 167395 /var/tmp/spdk-raid.sock 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 167395 ']' 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:22.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:22.556 23:20:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:22.556 [2024-07-13 23:20:11.927123] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
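Each rebuild test in this suite drives a dedicated bdevperf app as its RPC target: launch it in the background on a private socket, remember its pid for the eventual killprocess, and block until the socket answers RPCs before creating any bdevs. A hedged sketch of that startup sequence, with the harness's waitforlisten helper approximated by a plain poll loop:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Flags mirror the command line in the trace above
"$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
until "$rpc" -s "$sock" rpc_get_methods &>/dev/null; do
    sleep 0.1   # keep polling until the app listens on the UNIX domain socket
done

The -o 3M flag requests 3 MiB I/Os, which is also why the app notes at startup that the I/O size of 3145728 exceeds the zero-copy threshold.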
00:34:22.556 [2024-07-13 23:20:11.927352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167395 ] 00:34:22.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:22.556 Zero copy mechanism will not be used. 00:34:22.816 [2024-07-13 23:20:12.067728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.816 [2024-07-13 23:20:12.154712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.816 [2024-07-13 23:20:12.210278] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:23.757 23:20:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:23.757 23:20:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:34:23.757 23:20:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:23.757 23:20:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:24.043 BaseBdev1_malloc 00:34:24.043 23:20:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:24.043 [2024-07-13 23:20:13.412702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:24.043 [2024-07-13 23:20:13.412842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:24.043 [2024-07-13 23:20:13.412897] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:34:24.043 [2024-07-13 23:20:13.412989] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:24.043 [2024-07-13 23:20:13.415757] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:24.043 [2024-07-13 23:20:13.415819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:24.043 BaseBdev1 00:34:24.043 23:20:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:24.043 23:20:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:24.299 BaseBdev2_malloc 00:34:24.299 23:20:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:24.557 [2024-07-13 23:20:13.907902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:24.557 [2024-07-13 23:20:13.908009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:24.557 [2024-07-13 23:20:13.908053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:34:24.557 [2024-07-13 23:20:13.908102] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:24.557 [2024-07-13 23:20:13.910710] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:24.557 [2024-07-13 23:20:13.910774] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:34:24.557 BaseBdev2 00:34:24.557 23:20:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:24.557 23:20:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:24.815 BaseBdev3_malloc 00:34:24.815 23:20:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:25.074 [2024-07-13 23:20:14.430274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:25.074 [2024-07-13 23:20:14.430399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:25.074 [2024-07-13 23:20:14.430448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:25.074 [2024-07-13 23:20:14.430494] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:25.074 [2024-07-13 23:20:14.433011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:25.074 [2024-07-13 23:20:14.433088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:25.074 BaseBdev3 00:34:25.074 23:20:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:25.074 23:20:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:25.332 BaseBdev4_malloc 00:34:25.332 23:20:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:25.590 [2024-07-13 23:20:14.917617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:34:25.590 [2024-07-13 23:20:14.917756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:25.590 [2024-07-13 23:20:14.917798] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:25.590 [2024-07-13 23:20:14.917846] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:25.590 [2024-07-13 23:20:14.920516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:25.590 [2024-07-13 23:20:14.920589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:25.590 BaseBdev4 00:34:25.590 23:20:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:34:25.848 spare_malloc 00:34:25.848 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:26.169 spare_delay 00:34:26.169 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:26.427 [2024-07-13 23:20:15.648864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:26.427 [2024-07-13 23:20:15.649041] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:26.427 [2024-07-13 23:20:15.649093] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:26.427 [2024-07-13 23:20:15.649141] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:26.427 [2024-07-13 23:20:15.651766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:26.427 [2024-07-13 23:20:15.651842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:26.427 spare 00:34:26.427 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:34:26.685 [2024-07-13 23:20:15.885051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:26.685 [2024-07-13 23:20:15.887347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:26.685 [2024-07-13 23:20:15.887430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:26.685 [2024-07-13 23:20:15.887491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:26.685 [2024-07-13 23:20:15.887776] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:34:26.685 [2024-07-13 23:20:15.887798] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:26.685 [2024-07-13 23:20:15.887969] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:34:26.685 [2024-07-13 23:20:15.888847] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:34:26.685 [2024-07-13 23:20:15.888870] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:34:26.686 [2024-07-13 23:20:15.889135] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.686 23:20:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
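Reconstructing the setup that the passthru notices above trace out: every base bdev is a 32 MiB malloc bdev with 512-byte blocks wrapped in a passthru bdev, the spare additionally sits behind a delay bdev (evidently so the rebuild stays slow enough to observe in flight), and the array is assembled as raid5f with a 64 KiB strip and an on-disk superblock via -s. A condensed sketch of those RPC calls, using a small helper function for brevity:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"            # 32 MiB, 512 B blocks
    rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i"
done
rpc bdev_malloc_create 32 512 -b spare_malloc
rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc bdev_passthru_create -b spare_delay -p spare
rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

With the superblock enabled, 2048 of each member's 65536 blocks go to metadata, which is why data_offset is 2048 and data_size is 63488 in the JSON that follows.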
00:34:26.944 23:20:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:26.944 "name": "raid_bdev1", 00:34:26.944 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:26.944 "strip_size_kb": 64, 00:34:26.944 "state": "online", 00:34:26.944 "raid_level": "raid5f", 00:34:26.944 "superblock": true, 00:34:26.944 "num_base_bdevs": 4, 00:34:26.944 "num_base_bdevs_discovered": 4, 00:34:26.944 "num_base_bdevs_operational": 4, 00:34:26.944 "base_bdevs_list": [ 00:34:26.944 { 00:34:26.944 "name": "BaseBdev1", 00:34:26.944 "uuid": "9e2e676a-e187-5bf4-ae35-a15f485e6271", 00:34:26.944 "is_configured": true, 00:34:26.944 "data_offset": 2048, 00:34:26.944 "data_size": 63488 00:34:26.944 }, 00:34:26.944 { 00:34:26.944 "name": "BaseBdev2", 00:34:26.944 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:26.944 "is_configured": true, 00:34:26.944 "data_offset": 2048, 00:34:26.944 "data_size": 63488 00:34:26.944 }, 00:34:26.944 { 00:34:26.944 "name": "BaseBdev3", 00:34:26.944 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:26.944 "is_configured": true, 00:34:26.944 "data_offset": 2048, 00:34:26.944 "data_size": 63488 00:34:26.944 }, 00:34:26.944 { 00:34:26.944 "name": "BaseBdev4", 00:34:26.944 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:26.944 "is_configured": true, 00:34:26.944 "data_offset": 2048, 00:34:26.944 "data_size": 63488 00:34:26.944 } 00:34:26.944 ] 00:34:26.944 }' 00:34:26.944 23:20:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:26.944 23:20:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.510 23:20:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:27.510 23:20:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:34:27.769 [2024-07-13 23:20:16.993943] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:27.769 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:34:27.769 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.769 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:28.027 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:34:28.027 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:34:28.027 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:28.028 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:28.287 [2024-07-13 23:20:17.510719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:34:28.287 /dev/nbd0 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:28.287 1+0 records in 00:34:28.287 1+0 records out 00:34:28.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386944 s, 10.6 MB/s 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:34:28.287 23:20:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:34:28.853 496+0 records in 00:34:28.853 496+0 records out 00:34:28.853 97517568 bytes (98 MB, 93 MiB) copied, 0.467265 s, 209 MB/s 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd0 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:28.853 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:29.111 [2024-07-13 23:20:18.312149] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:29.111 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:29.368 [2024-07-13 23:20:18.539760] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.368 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.626 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:29.626 "name": "raid_bdev1", 
00:34:29.626 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:29.626 "strip_size_kb": 64, 00:34:29.626 "state": "online", 00:34:29.626 "raid_level": "raid5f", 00:34:29.626 "superblock": true, 00:34:29.626 "num_base_bdevs": 4, 00:34:29.626 "num_base_bdevs_discovered": 3, 00:34:29.626 "num_base_bdevs_operational": 3, 00:34:29.626 "base_bdevs_list": [ 00:34:29.626 { 00:34:29.626 "name": null, 00:34:29.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.626 "is_configured": false, 00:34:29.626 "data_offset": 2048, 00:34:29.626 "data_size": 63488 00:34:29.626 }, 00:34:29.626 { 00:34:29.626 "name": "BaseBdev2", 00:34:29.626 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:29.626 "is_configured": true, 00:34:29.626 "data_offset": 2048, 00:34:29.626 "data_size": 63488 00:34:29.626 }, 00:34:29.626 { 00:34:29.626 "name": "BaseBdev3", 00:34:29.626 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:29.626 "is_configured": true, 00:34:29.626 "data_offset": 2048, 00:34:29.626 "data_size": 63488 00:34:29.626 }, 00:34:29.626 { 00:34:29.626 "name": "BaseBdev4", 00:34:29.626 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:29.626 "is_configured": true, 00:34:29.626 "data_offset": 2048, 00:34:29.626 "data_size": 63488 00:34:29.626 } 00:34:29.626 ] 00:34:29.626 }' 00:34:29.626 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:29.626 23:20:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.192 23:20:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:30.450 [2024-07-13 23:20:19.676032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:30.450 [2024-07-13 23:20:19.680578] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:34:30.450 [2024-07-13 23:20:19.683324] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:30.450 23:20:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.385 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.643 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:31.643 "name": "raid_bdev1", 00:34:31.643 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:31.643 "strip_size_kb": 64, 00:34:31.643 "state": "online", 00:34:31.643 "raid_level": "raid5f", 00:34:31.643 "superblock": true, 00:34:31.643 "num_base_bdevs": 4, 00:34:31.643 "num_base_bdevs_discovered": 4, 00:34:31.643 "num_base_bdevs_operational": 4, 00:34:31.643 "process": { 00:34:31.643 
"type": "rebuild", 00:34:31.643 "target": "spare", 00:34:31.643 "progress": { 00:34:31.643 "blocks": 23040, 00:34:31.643 "percent": 12 00:34:31.643 } 00:34:31.643 }, 00:34:31.643 "base_bdevs_list": [ 00:34:31.643 { 00:34:31.643 "name": "spare", 00:34:31.643 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:31.643 "is_configured": true, 00:34:31.643 "data_offset": 2048, 00:34:31.643 "data_size": 63488 00:34:31.643 }, 00:34:31.643 { 00:34:31.643 "name": "BaseBdev2", 00:34:31.643 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:31.643 "is_configured": true, 00:34:31.643 "data_offset": 2048, 00:34:31.643 "data_size": 63488 00:34:31.643 }, 00:34:31.643 { 00:34:31.643 "name": "BaseBdev3", 00:34:31.643 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:31.643 "is_configured": true, 00:34:31.643 "data_offset": 2048, 00:34:31.643 "data_size": 63488 00:34:31.643 }, 00:34:31.643 { 00:34:31.643 "name": "BaseBdev4", 00:34:31.643 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:31.643 "is_configured": true, 00:34:31.643 "data_offset": 2048, 00:34:31.643 "data_size": 63488 00:34:31.643 } 00:34:31.643 ] 00:34:31.643 }' 00:34:31.643 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:31.643 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:31.643 23:20:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:31.643 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:31.643 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:31.901 [2024-07-13 23:20:21.266152] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:31.901 [2024-07-13 23:20:21.298339] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:31.901 [2024-07-13 23:20:21.298520] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:31.901 [2024-07-13 23:20:21.298546] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:31.901 [2024-07-13 23:20:21.298554] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.160 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.419 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:32.419 "name": "raid_bdev1", 00:34:32.419 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:32.419 "strip_size_kb": 64, 00:34:32.419 "state": "online", 00:34:32.419 "raid_level": "raid5f", 00:34:32.419 "superblock": true, 00:34:32.419 "num_base_bdevs": 4, 00:34:32.419 "num_base_bdevs_discovered": 3, 00:34:32.419 "num_base_bdevs_operational": 3, 00:34:32.419 "base_bdevs_list": [ 00:34:32.419 { 00:34:32.419 "name": null, 00:34:32.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.419 "is_configured": false, 00:34:32.419 "data_offset": 2048, 00:34:32.419 "data_size": 63488 00:34:32.419 }, 00:34:32.419 { 00:34:32.419 "name": "BaseBdev2", 00:34:32.419 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:32.419 "is_configured": true, 00:34:32.419 "data_offset": 2048, 00:34:32.419 "data_size": 63488 00:34:32.419 }, 00:34:32.419 { 00:34:32.419 "name": "BaseBdev3", 00:34:32.419 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:32.419 "is_configured": true, 00:34:32.419 "data_offset": 2048, 00:34:32.419 "data_size": 63488 00:34:32.419 }, 00:34:32.419 { 00:34:32.419 "name": "BaseBdev4", 00:34:32.419 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:32.419 "is_configured": true, 00:34:32.419 "data_offset": 2048, 00:34:32.419 "data_size": 63488 00:34:32.419 } 00:34:32.419 ] 00:34:32.419 }' 00:34:32.419 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:32.419 23:20:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.986 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.245 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:33.245 "name": "raid_bdev1", 00:34:33.245 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:33.245 "strip_size_kb": 64, 00:34:33.245 "state": "online", 00:34:33.245 "raid_level": "raid5f", 00:34:33.245 "superblock": true, 00:34:33.245 "num_base_bdevs": 4, 00:34:33.245 "num_base_bdevs_discovered": 3, 00:34:33.245 "num_base_bdevs_operational": 3, 00:34:33.245 "base_bdevs_list": [ 00:34:33.245 { 00:34:33.245 "name": null, 00:34:33.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:33.245 "is_configured": false, 00:34:33.245 "data_offset": 2048, 00:34:33.245 "data_size": 63488 00:34:33.245 }, 00:34:33.245 { 
00:34:33.245 "name": "BaseBdev2", 00:34:33.245 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:33.245 "is_configured": true, 00:34:33.245 "data_offset": 2048, 00:34:33.245 "data_size": 63488 00:34:33.245 }, 00:34:33.245 { 00:34:33.245 "name": "BaseBdev3", 00:34:33.245 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:33.245 "is_configured": true, 00:34:33.245 "data_offset": 2048, 00:34:33.245 "data_size": 63488 00:34:33.245 }, 00:34:33.245 { 00:34:33.245 "name": "BaseBdev4", 00:34:33.245 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:33.245 "is_configured": true, 00:34:33.245 "data_offset": 2048, 00:34:33.245 "data_size": 63488 00:34:33.245 } 00:34:33.245 ] 00:34:33.245 }' 00:34:33.245 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:33.245 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:33.245 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:33.245 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:33.245 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:33.504 [2024-07-13 23:20:22.854168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.504 [2024-07-13 23:20:22.858643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027240 00:34:33.504 [2024-07-13 23:20:22.861223] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:33.504 23:20:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.876 23:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:34.876 "name": "raid_bdev1", 00:34:34.876 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:34.876 "strip_size_kb": 64, 00:34:34.876 "state": "online", 00:34:34.876 "raid_level": "raid5f", 00:34:34.876 "superblock": true, 00:34:34.876 "num_base_bdevs": 4, 00:34:34.876 "num_base_bdevs_discovered": 4, 00:34:34.876 "num_base_bdevs_operational": 4, 00:34:34.876 "process": { 00:34:34.876 "type": "rebuild", 00:34:34.876 "target": "spare", 00:34:34.876 "progress": { 00:34:34.876 "blocks": 23040, 00:34:34.876 "percent": 12 00:34:34.876 } 00:34:34.876 }, 00:34:34.876 "base_bdevs_list": [ 00:34:34.876 { 00:34:34.876 "name": "spare", 00:34:34.876 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:34.876 "is_configured": true, 
00:34:34.876 "data_offset": 2048, 00:34:34.876 "data_size": 63488 00:34:34.876 }, 00:34:34.876 { 00:34:34.876 "name": "BaseBdev2", 00:34:34.876 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:34.876 "is_configured": true, 00:34:34.876 "data_offset": 2048, 00:34:34.876 "data_size": 63488 00:34:34.876 }, 00:34:34.876 { 00:34:34.876 "name": "BaseBdev3", 00:34:34.876 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:34.876 "is_configured": true, 00:34:34.876 "data_offset": 2048, 00:34:34.876 "data_size": 63488 00:34:34.876 }, 00:34:34.876 { 00:34:34.876 "name": "BaseBdev4", 00:34:34.876 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:34.876 "is_configured": true, 00:34:34.876 "data_offset": 2048, 00:34:34.876 "data_size": 63488 00:34:34.876 } 00:34:34.876 ] 00:34:34.876 }' 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:34:34.876 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1262 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.876 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.135 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:35.135 "name": "raid_bdev1", 00:34:35.135 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:35.135 "strip_size_kb": 64, 00:34:35.135 "state": "online", 00:34:35.135 "raid_level": "raid5f", 00:34:35.135 "superblock": true, 00:34:35.135 "num_base_bdevs": 4, 00:34:35.135 "num_base_bdevs_discovered": 4, 00:34:35.135 "num_base_bdevs_operational": 4, 00:34:35.135 "process": { 00:34:35.135 "type": "rebuild", 00:34:35.135 "target": "spare", 00:34:35.135 "progress": { 00:34:35.135 "blocks": 30720, 00:34:35.135 "percent": 16 00:34:35.135 } 
00:34:35.135 }, 00:34:35.135 "base_bdevs_list": [ 00:34:35.135 { 00:34:35.135 "name": "spare", 00:34:35.135 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:35.135 "is_configured": true, 00:34:35.135 "data_offset": 2048, 00:34:35.135 "data_size": 63488 00:34:35.135 }, 00:34:35.135 { 00:34:35.135 "name": "BaseBdev2", 00:34:35.135 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:35.135 "is_configured": true, 00:34:35.135 "data_offset": 2048, 00:34:35.135 "data_size": 63488 00:34:35.135 }, 00:34:35.135 { 00:34:35.135 "name": "BaseBdev3", 00:34:35.135 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:35.135 "is_configured": true, 00:34:35.135 "data_offset": 2048, 00:34:35.135 "data_size": 63488 00:34:35.135 }, 00:34:35.135 { 00:34:35.135 "name": "BaseBdev4", 00:34:35.135 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:35.135 "is_configured": true, 00:34:35.135 "data_offset": 2048, 00:34:35.135 "data_size": 63488 00:34:35.135 } 00:34:35.135 ] 00:34:35.135 }' 00:34:35.135 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:35.394 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:35.394 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:35.394 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:35.394 23:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.326 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.583 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:36.583 "name": "raid_bdev1", 00:34:36.583 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:36.583 "strip_size_kb": 64, 00:34:36.583 "state": "online", 00:34:36.583 "raid_level": "raid5f", 00:34:36.583 "superblock": true, 00:34:36.583 "num_base_bdevs": 4, 00:34:36.583 "num_base_bdevs_discovered": 4, 00:34:36.583 "num_base_bdevs_operational": 4, 00:34:36.583 "process": { 00:34:36.583 "type": "rebuild", 00:34:36.583 "target": "spare", 00:34:36.583 "progress": { 00:34:36.583 "blocks": 55680, 00:34:36.584 "percent": 29 00:34:36.584 } 00:34:36.584 }, 00:34:36.584 "base_bdevs_list": [ 00:34:36.584 { 00:34:36.584 "name": "spare", 00:34:36.584 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:36.584 "is_configured": true, 00:34:36.584 "data_offset": 2048, 00:34:36.584 "data_size": 63488 00:34:36.584 }, 00:34:36.584 { 00:34:36.584 "name": "BaseBdev2", 00:34:36.584 "uuid": 
"9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:36.584 "is_configured": true, 00:34:36.584 "data_offset": 2048, 00:34:36.584 "data_size": 63488 00:34:36.584 }, 00:34:36.584 { 00:34:36.584 "name": "BaseBdev3", 00:34:36.584 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:36.584 "is_configured": true, 00:34:36.584 "data_offset": 2048, 00:34:36.584 "data_size": 63488 00:34:36.584 }, 00:34:36.584 { 00:34:36.584 "name": "BaseBdev4", 00:34:36.584 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:36.584 "is_configured": true, 00:34:36.584 "data_offset": 2048, 00:34:36.584 "data_size": 63488 00:34:36.584 } 00:34:36.584 ] 00:34:36.584 }' 00:34:36.584 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:36.584 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:36.584 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:36.584 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:36.584 23:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.003 23:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.003 23:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:38.003 "name": "raid_bdev1", 00:34:38.003 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:38.003 "strip_size_kb": 64, 00:34:38.003 "state": "online", 00:34:38.003 "raid_level": "raid5f", 00:34:38.003 "superblock": true, 00:34:38.003 "num_base_bdevs": 4, 00:34:38.003 "num_base_bdevs_discovered": 4, 00:34:38.003 "num_base_bdevs_operational": 4, 00:34:38.003 "process": { 00:34:38.003 "type": "rebuild", 00:34:38.003 "target": "spare", 00:34:38.003 "progress": { 00:34:38.003 "blocks": 82560, 00:34:38.003 "percent": 43 00:34:38.003 } 00:34:38.003 }, 00:34:38.003 "base_bdevs_list": [ 00:34:38.003 { 00:34:38.003 "name": "spare", 00:34:38.003 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:38.003 "is_configured": true, 00:34:38.003 "data_offset": 2048, 00:34:38.003 "data_size": 63488 00:34:38.003 }, 00:34:38.003 { 00:34:38.003 "name": "BaseBdev2", 00:34:38.003 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:38.003 "is_configured": true, 00:34:38.003 "data_offset": 2048, 00:34:38.003 "data_size": 63488 00:34:38.003 }, 00:34:38.003 { 00:34:38.003 "name": "BaseBdev3", 00:34:38.003 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:38.003 "is_configured": true, 00:34:38.003 "data_offset": 2048, 00:34:38.003 "data_size": 
63488 00:34:38.003 }, 00:34:38.003 { 00:34:38.003 "name": "BaseBdev4", 00:34:38.003 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:38.004 "is_configured": true, 00:34:38.004 "data_offset": 2048, 00:34:38.004 "data_size": 63488 00:34:38.004 } 00:34:38.004 ] 00:34:38.004 }' 00:34:38.004 23:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:38.004 23:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:38.004 23:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:38.004 23:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:38.004 23:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:39.377 "name": "raid_bdev1", 00:34:39.377 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:39.377 "strip_size_kb": 64, 00:34:39.377 "state": "online", 00:34:39.377 "raid_level": "raid5f", 00:34:39.377 "superblock": true, 00:34:39.377 "num_base_bdevs": 4, 00:34:39.377 "num_base_bdevs_discovered": 4, 00:34:39.377 "num_base_bdevs_operational": 4, 00:34:39.377 "process": { 00:34:39.377 "type": "rebuild", 00:34:39.377 "target": "spare", 00:34:39.377 "progress": { 00:34:39.377 "blocks": 109440, 00:34:39.377 "percent": 57 00:34:39.377 } 00:34:39.377 }, 00:34:39.377 "base_bdevs_list": [ 00:34:39.377 { 00:34:39.377 "name": "spare", 00:34:39.377 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:39.377 "is_configured": true, 00:34:39.377 "data_offset": 2048, 00:34:39.377 "data_size": 63488 00:34:39.377 }, 00:34:39.377 { 00:34:39.377 "name": "BaseBdev2", 00:34:39.377 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:39.377 "is_configured": true, 00:34:39.377 "data_offset": 2048, 00:34:39.377 "data_size": 63488 00:34:39.377 }, 00:34:39.377 { 00:34:39.377 "name": "BaseBdev3", 00:34:39.377 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:39.377 "is_configured": true, 00:34:39.377 "data_offset": 2048, 00:34:39.377 "data_size": 63488 00:34:39.377 }, 00:34:39.377 { 00:34:39.377 "name": "BaseBdev4", 00:34:39.377 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:39.377 "is_configured": true, 00:34:39.377 "data_offset": 2048, 00:34:39.377 "data_size": 63488 00:34:39.377 } 00:34:39.377 ] 00:34:39.377 }' 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq 
-r '.process.type // "none"' 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:39.377 23:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:40.750 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.751 23:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.751 23:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:40.751 "name": "raid_bdev1", 00:34:40.751 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:40.751 "strip_size_kb": 64, 00:34:40.751 "state": "online", 00:34:40.751 "raid_level": "raid5f", 00:34:40.751 "superblock": true, 00:34:40.751 "num_base_bdevs": 4, 00:34:40.751 "num_base_bdevs_discovered": 4, 00:34:40.751 "num_base_bdevs_operational": 4, 00:34:40.751 "process": { 00:34:40.751 "type": "rebuild", 00:34:40.751 "target": "spare", 00:34:40.751 "progress": { 00:34:40.751 "blocks": 134400, 00:34:40.751 "percent": 70 00:34:40.751 } 00:34:40.751 }, 00:34:40.751 "base_bdevs_list": [ 00:34:40.751 { 00:34:40.751 "name": "spare", 00:34:40.751 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:40.751 "is_configured": true, 00:34:40.751 "data_offset": 2048, 00:34:40.751 "data_size": 63488 00:34:40.751 }, 00:34:40.751 { 00:34:40.751 "name": "BaseBdev2", 00:34:40.751 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:40.751 "is_configured": true, 00:34:40.751 "data_offset": 2048, 00:34:40.751 "data_size": 63488 00:34:40.751 }, 00:34:40.751 { 00:34:40.751 "name": "BaseBdev3", 00:34:40.751 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:40.751 "is_configured": true, 00:34:40.751 "data_offset": 2048, 00:34:40.751 "data_size": 63488 00:34:40.751 }, 00:34:40.751 { 00:34:40.751 "name": "BaseBdev4", 00:34:40.751 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:40.751 "is_configured": true, 00:34:40.751 "data_offset": 2048, 00:34:40.751 "data_size": 63488 00:34:40.751 } 00:34:40.751 ] 00:34:40.751 }' 00:34:40.751 23:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:40.751 23:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:40.751 23:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:40.751 23:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == 
\s\p\a\r\e ]] 00:34:40.751 23:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:42.123 "name": "raid_bdev1", 00:34:42.123 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:42.123 "strip_size_kb": 64, 00:34:42.123 "state": "online", 00:34:42.123 "raid_level": "raid5f", 00:34:42.123 "superblock": true, 00:34:42.123 "num_base_bdevs": 4, 00:34:42.123 "num_base_bdevs_discovered": 4, 00:34:42.123 "num_base_bdevs_operational": 4, 00:34:42.123 "process": { 00:34:42.123 "type": "rebuild", 00:34:42.123 "target": "spare", 00:34:42.123 "progress": { 00:34:42.123 "blocks": 161280, 00:34:42.123 "percent": 84 00:34:42.123 } 00:34:42.123 }, 00:34:42.123 "base_bdevs_list": [ 00:34:42.123 { 00:34:42.123 "name": "spare", 00:34:42.123 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:42.123 "is_configured": true, 00:34:42.123 "data_offset": 2048, 00:34:42.123 "data_size": 63488 00:34:42.123 }, 00:34:42.123 { 00:34:42.123 "name": "BaseBdev2", 00:34:42.123 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:42.123 "is_configured": true, 00:34:42.123 "data_offset": 2048, 00:34:42.123 "data_size": 63488 00:34:42.123 }, 00:34:42.123 { 00:34:42.123 "name": "BaseBdev3", 00:34:42.123 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:42.123 "is_configured": true, 00:34:42.123 "data_offset": 2048, 00:34:42.123 "data_size": 63488 00:34:42.123 }, 00:34:42.123 { 00:34:42.123 "name": "BaseBdev4", 00:34:42.123 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:42.123 "is_configured": true, 00:34:42.123 "data_offset": 2048, 00:34:42.123 "data_size": 63488 00:34:42.123 } 00:34:42.123 ] 00:34:42.123 }' 00:34:42.123 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:42.124 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:42.124 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:42.124 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:42.124 23:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:43.530 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:43.530 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:43.530 
23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:43.530 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:43.531 "name": "raid_bdev1", 00:34:43.531 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:43.531 "strip_size_kb": 64, 00:34:43.531 "state": "online", 00:34:43.531 "raid_level": "raid5f", 00:34:43.531 "superblock": true, 00:34:43.531 "num_base_bdevs": 4, 00:34:43.531 "num_base_bdevs_discovered": 4, 00:34:43.531 "num_base_bdevs_operational": 4, 00:34:43.531 "process": { 00:34:43.531 "type": "rebuild", 00:34:43.531 "target": "spare", 00:34:43.531 "progress": { 00:34:43.531 "blocks": 186240, 00:34:43.531 "percent": 97 00:34:43.531 } 00:34:43.531 }, 00:34:43.531 "base_bdevs_list": [ 00:34:43.531 { 00:34:43.531 "name": "spare", 00:34:43.531 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:43.531 "is_configured": true, 00:34:43.531 "data_offset": 2048, 00:34:43.531 "data_size": 63488 00:34:43.531 }, 00:34:43.531 { 00:34:43.531 "name": "BaseBdev2", 00:34:43.531 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:43.531 "is_configured": true, 00:34:43.531 "data_offset": 2048, 00:34:43.531 "data_size": 63488 00:34:43.531 }, 00:34:43.531 { 00:34:43.531 "name": "BaseBdev3", 00:34:43.531 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:43.531 "is_configured": true, 00:34:43.531 "data_offset": 2048, 00:34:43.531 "data_size": 63488 00:34:43.531 }, 00:34:43.531 { 00:34:43.531 "name": "BaseBdev4", 00:34:43.531 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:43.531 "is_configured": true, 00:34:43.531 "data_offset": 2048, 00:34:43.531 "data_size": 63488 00:34:43.531 } 00:34:43.531 ] 00:34:43.531 }' 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:43.531 23:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:43.789 [2024-07-13 23:20:32.945773] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:43.789 [2024-07-13 23:20:32.945871] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:43.789 [2024-07-13 23:20:32.946046] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.722 23:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:44.722 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:44.722 "name": "raid_bdev1", 00:34:44.722 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:44.722 "strip_size_kb": 64, 00:34:44.722 "state": "online", 00:34:44.722 "raid_level": "raid5f", 00:34:44.722 "superblock": true, 00:34:44.722 "num_base_bdevs": 4, 00:34:44.722 "num_base_bdevs_discovered": 4, 00:34:44.722 "num_base_bdevs_operational": 4, 00:34:44.722 "base_bdevs_list": [ 00:34:44.722 { 00:34:44.722 "name": "spare", 00:34:44.722 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:44.722 "is_configured": true, 00:34:44.722 "data_offset": 2048, 00:34:44.722 "data_size": 63488 00:34:44.722 }, 00:34:44.722 { 00:34:44.722 "name": "BaseBdev2", 00:34:44.722 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:44.722 "is_configured": true, 00:34:44.722 "data_offset": 2048, 00:34:44.722 "data_size": 63488 00:34:44.722 }, 00:34:44.722 { 00:34:44.722 "name": "BaseBdev3", 00:34:44.722 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:44.722 "is_configured": true, 00:34:44.722 "data_offset": 2048, 00:34:44.722 "data_size": 63488 00:34:44.722 }, 00:34:44.722 { 00:34:44.722 "name": "BaseBdev4", 00:34:44.722 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:44.722 "is_configured": true, 00:34:44.722 "data_offset": 2048, 00:34:44.722 "data_size": 63488 00:34:44.722 } 00:34:44.722 ] 00:34:44.722 }' 00:34:44.722 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.979 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:45.237 "name": "raid_bdev1", 00:34:45.237 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:45.237 "strip_size_kb": 64, 00:34:45.237 "state": "online", 00:34:45.237 "raid_level": "raid5f", 00:34:45.237 "superblock": true, 00:34:45.237 "num_base_bdevs": 4, 00:34:45.237 "num_base_bdevs_discovered": 4, 00:34:45.237 "num_base_bdevs_operational": 4, 00:34:45.237 "base_bdevs_list": [ 00:34:45.237 { 00:34:45.237 "name": "spare", 00:34:45.237 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:45.237 "is_configured": true, 00:34:45.237 "data_offset": 2048, 00:34:45.237 "data_size": 63488 00:34:45.237 }, 00:34:45.237 { 00:34:45.237 "name": "BaseBdev2", 00:34:45.237 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:45.237 "is_configured": true, 00:34:45.237 "data_offset": 2048, 00:34:45.237 "data_size": 63488 00:34:45.237 }, 00:34:45.237 { 00:34:45.237 "name": "BaseBdev3", 00:34:45.237 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:45.237 "is_configured": true, 00:34:45.237 "data_offset": 2048, 00:34:45.237 "data_size": 63488 00:34:45.237 }, 00:34:45.237 { 00:34:45.237 "name": "BaseBdev4", 00:34:45.237 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:45.237 "is_configured": true, 00:34:45.237 "data_offset": 2048, 00:34:45.237 "data_size": 63488 00:34:45.237 } 00:34:45.237 ] 00:34:45.237 }' 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.237 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.495 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:45.495 "name": "raid_bdev1", 00:34:45.495 "uuid": 
"12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:45.495 "strip_size_kb": 64, 00:34:45.495 "state": "online", 00:34:45.495 "raid_level": "raid5f", 00:34:45.495 "superblock": true, 00:34:45.495 "num_base_bdevs": 4, 00:34:45.495 "num_base_bdevs_discovered": 4, 00:34:45.495 "num_base_bdevs_operational": 4, 00:34:45.495 "base_bdevs_list": [ 00:34:45.495 { 00:34:45.495 "name": "spare", 00:34:45.495 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:45.495 "is_configured": true, 00:34:45.495 "data_offset": 2048, 00:34:45.495 "data_size": 63488 00:34:45.495 }, 00:34:45.495 { 00:34:45.495 "name": "BaseBdev2", 00:34:45.495 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:45.495 "is_configured": true, 00:34:45.495 "data_offset": 2048, 00:34:45.495 "data_size": 63488 00:34:45.495 }, 00:34:45.495 { 00:34:45.495 "name": "BaseBdev3", 00:34:45.495 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:45.495 "is_configured": true, 00:34:45.495 "data_offset": 2048, 00:34:45.495 "data_size": 63488 00:34:45.495 }, 00:34:45.495 { 00:34:45.495 "name": "BaseBdev4", 00:34:45.495 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:45.495 "is_configured": true, 00:34:45.495 "data_offset": 2048, 00:34:45.495 "data_size": 63488 00:34:45.495 } 00:34:45.495 ] 00:34:45.495 }' 00:34:45.495 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:45.495 23:20:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.428 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:46.428 [2024-07-13 23:20:35.700618] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:46.428 [2024-07-13 23:20:35.700658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:46.428 [2024-07-13 23:20:35.700762] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:46.428 [2024-07-13 23:20:35.700880] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:46.428 [2024-07-13 23:20:35.700894] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:34:46.428 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:34:46.428 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:46.687 
23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:46.687 23:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:46.945 /dev/nbd0 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:46.945 1+0 records in 00:34:46.945 1+0 records out 00:34:46.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525177 s, 7.8 MB/s 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:46.945 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:47.203 /dev/nbd1 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:34:47.203 23:20:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:47.203 1+0 records in 00:34:47.203 1+0 records out 00:34:47.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437618 s, 9.4 MB/s 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:47.203 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:47.461 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:47.719 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:47.719 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:47.719 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:47.719 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.719 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.720 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:47.720 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:47.720 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.720 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:34:47.720 23:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:34:47.978 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:48.237 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:48.494 [2024-07-13 23:20:37.681996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:48.494 [2024-07-13 23:20:37.682110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.494 [2024-07-13 23:20:37.682145] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:48.494 [2024-07-13 23:20:37.682180] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.494 [2024-07-13 23:20:37.685023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.494 [2024-07-13 23:20:37.685088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:48.494 [2024-07-13 23:20:37.685192] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:48.494 [2024-07-13 23:20:37.685302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:48.494 [2024-07-13 23:20:37.685533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:48.494 [2024-07-13 23:20:37.685684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:48.494 [2024-07-13 23:20:37.685786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:48.494 spare 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.494 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.494 [2024-07-13 23:20:37.785920] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:34:48.494 [2024-07-13 23:20:37.785945] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:48.494 [2024-07-13 23:20:37.786100] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045b60 00:34:48.494 [2024-07-13 23:20:37.787034] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:34:48.494 [2024-07-13 23:20:37.787059] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:34:48.494 [2024-07-13 23:20:37.787264] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:48.751 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:48.751 "name": "raid_bdev1", 00:34:48.751 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:48.751 "strip_size_kb": 64, 00:34:48.751 "state": "online", 00:34:48.751 "raid_level": "raid5f", 00:34:48.751 "superblock": true, 00:34:48.751 "num_base_bdevs": 4, 00:34:48.751 "num_base_bdevs_discovered": 4, 00:34:48.751 "num_base_bdevs_operational": 4, 00:34:48.751 "base_bdevs_list": [ 00:34:48.751 { 00:34:48.751 "name": "spare", 00:34:48.751 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:48.751 "is_configured": true, 00:34:48.751 "data_offset": 2048, 00:34:48.751 "data_size": 63488 00:34:48.751 }, 00:34:48.751 { 00:34:48.751 "name": "BaseBdev2", 00:34:48.751 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:48.751 "is_configured": true, 00:34:48.751 "data_offset": 2048, 00:34:48.751 "data_size": 63488 00:34:48.751 }, 00:34:48.751 { 00:34:48.751 "name": "BaseBdev3", 00:34:48.751 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:48.751 "is_configured": true, 00:34:48.751 "data_offset": 2048, 00:34:48.751 "data_size": 63488 00:34:48.751 }, 00:34:48.751 { 00:34:48.751 "name": "BaseBdev4", 00:34:48.751 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:48.751 "is_configured": true, 00:34:48.751 "data_offset": 2048, 00:34:48.751 "data_size": 63488 00:34:48.751 } 00:34:48.751 ] 00:34:48.751 }' 00:34:48.751 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:48.751 23:20:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.315 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:49.315 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:49.315 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:49.315 23:20:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:49.315 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:49.315 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.315 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.572 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:49.572 "name": "raid_bdev1", 00:34:49.572 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:49.572 "strip_size_kb": 64, 00:34:49.572 "state": "online", 00:34:49.572 "raid_level": "raid5f", 00:34:49.572 "superblock": true, 00:34:49.572 "num_base_bdevs": 4, 00:34:49.572 "num_base_bdevs_discovered": 4, 00:34:49.572 "num_base_bdevs_operational": 4, 00:34:49.572 "base_bdevs_list": [ 00:34:49.572 { 00:34:49.572 "name": "spare", 00:34:49.572 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:49.572 "is_configured": true, 00:34:49.572 "data_offset": 2048, 00:34:49.572 "data_size": 63488 00:34:49.572 }, 00:34:49.572 { 00:34:49.572 "name": "BaseBdev2", 00:34:49.572 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:49.572 "is_configured": true, 00:34:49.572 "data_offset": 2048, 00:34:49.572 "data_size": 63488 00:34:49.572 }, 00:34:49.572 { 00:34:49.572 "name": "BaseBdev3", 00:34:49.572 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:49.572 "is_configured": true, 00:34:49.572 "data_offset": 2048, 00:34:49.572 "data_size": 63488 00:34:49.572 }, 00:34:49.572 { 00:34:49.572 "name": "BaseBdev4", 00:34:49.572 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:49.572 "is_configured": true, 00:34:49.572 "data_offset": 2048, 00:34:49.572 "data_size": 63488 00:34:49.572 } 00:34:49.572 ] 00:34:49.572 }' 00:34:49.572 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:49.572 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:49.572 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:49.830 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:49.830 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.830 23:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:49.830 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:34:49.830 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:50.088 [2024-07-13 23:20:39.398186] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:50.088 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.345 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:50.345 "name": "raid_bdev1", 00:34:50.345 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:50.345 "strip_size_kb": 64, 00:34:50.345 "state": "online", 00:34:50.345 "raid_level": "raid5f", 00:34:50.345 "superblock": true, 00:34:50.345 "num_base_bdevs": 4, 00:34:50.345 "num_base_bdevs_discovered": 3, 00:34:50.345 "num_base_bdevs_operational": 3, 00:34:50.345 "base_bdevs_list": [ 00:34:50.345 { 00:34:50.345 "name": null, 00:34:50.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.345 "is_configured": false, 00:34:50.345 "data_offset": 2048, 00:34:50.345 "data_size": 63488 00:34:50.345 }, 00:34:50.345 { 00:34:50.345 "name": "BaseBdev2", 00:34:50.345 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:50.345 "is_configured": true, 00:34:50.345 "data_offset": 2048, 00:34:50.345 "data_size": 63488 00:34:50.345 }, 00:34:50.345 { 00:34:50.345 "name": "BaseBdev3", 00:34:50.345 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:50.345 "is_configured": true, 00:34:50.345 "data_offset": 2048, 00:34:50.345 "data_size": 63488 00:34:50.345 }, 00:34:50.345 { 00:34:50.345 "name": "BaseBdev4", 00:34:50.345 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:50.345 "is_configured": true, 00:34:50.345 "data_offset": 2048, 00:34:50.345 "data_size": 63488 00:34:50.345 } 00:34:50.345 ] 00:34:50.345 }' 00:34:50.345 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:50.345 23:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.279 23:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:51.279 [2024-07-13 23:20:40.544373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:51.279 [2024-07-13 23:20:40.544595] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:51.279 [2024-07-13 23:20:40.544614] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
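[Editor's note] The trace above exercises the raid5f hot-remove/re-add path: the delayed passthru bdev "spare" is removed from raid_bdev1 (num_base_bdevs_discovered drops 4 -> 3, state stays online), then re-added; because its superblock seq_number (4) is older than the array's (5), bdev_raid logs "Re-adding bdev spare to raid bdev raid_bdev1" and starts the rebuild that the script polls next. A minimal sketch of that RPC sequence, using only the socket path, script paths, and jq filters that appear in this log (the sleep mirrors bdev_raid.sh@755 and is a test convenience, not a synchronization guarantee):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Degrade the array: drop one base bdev (4 -> 3 operational).
$rpc bdev_raid_remove_base_bdev spare
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# Re-add it; the stale superblock seq_number triggers the rebuild seen below.
$rpc bdev_raid_add_base_bdev raid_bdev1 spare
sleep 1
# Poll the background process descriptor ("rebuild" targeting "spare").
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'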
00:34:51.279 [2024-07-13 23:20:40.544704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:51.279 [2024-07-13 23:20:40.548824] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045d00 00:34:51.279 [2024-07-13 23:20:40.551309] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:51.279 23:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.215 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:52.474 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:52.474 "name": "raid_bdev1", 00:34:52.474 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:52.474 "strip_size_kb": 64, 00:34:52.474 "state": "online", 00:34:52.474 "raid_level": "raid5f", 00:34:52.474 "superblock": true, 00:34:52.474 "num_base_bdevs": 4, 00:34:52.474 "num_base_bdevs_discovered": 4, 00:34:52.474 "num_base_bdevs_operational": 4, 00:34:52.474 "process": { 00:34:52.474 "type": "rebuild", 00:34:52.474 "target": "spare", 00:34:52.474 "progress": { 00:34:52.474 "blocks": 23040, 00:34:52.474 "percent": 12 00:34:52.474 } 00:34:52.474 }, 00:34:52.474 "base_bdevs_list": [ 00:34:52.474 { 00:34:52.474 "name": "spare", 00:34:52.474 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:52.474 "is_configured": true, 00:34:52.474 "data_offset": 2048, 00:34:52.474 "data_size": 63488 00:34:52.474 }, 00:34:52.474 { 00:34:52.474 "name": "BaseBdev2", 00:34:52.474 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:52.474 "is_configured": true, 00:34:52.474 "data_offset": 2048, 00:34:52.474 "data_size": 63488 00:34:52.474 }, 00:34:52.474 { 00:34:52.474 "name": "BaseBdev3", 00:34:52.474 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:52.474 "is_configured": true, 00:34:52.474 "data_offset": 2048, 00:34:52.474 "data_size": 63488 00:34:52.474 }, 00:34:52.474 { 00:34:52.474 "name": "BaseBdev4", 00:34:52.474 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:52.474 "is_configured": true, 00:34:52.474 "data_offset": 2048, 00:34:52.474 "data_size": 63488 00:34:52.474 } 00:34:52.474 ] 00:34:52.474 }' 00:34:52.474 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:52.474 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:52.474 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:52.732 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:52.732 23:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:52.991 [2024-07-13 23:20:42.166342] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:52.991 [2024-07-13 23:20:42.263713] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:52.991 [2024-07-13 23:20:42.263827] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:52.991 [2024-07-13 23:20:42.263849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:52.991 [2024-07-13 23:20:42.263857] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:52.991 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.250 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:53.250 "name": "raid_bdev1", 00:34:53.250 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:53.250 "strip_size_kb": 64, 00:34:53.250 "state": "online", 00:34:53.250 "raid_level": "raid5f", 00:34:53.250 "superblock": true, 00:34:53.250 "num_base_bdevs": 4, 00:34:53.250 "num_base_bdevs_discovered": 3, 00:34:53.250 "num_base_bdevs_operational": 3, 00:34:53.250 "base_bdevs_list": [ 00:34:53.250 { 00:34:53.250 "name": null, 00:34:53.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.250 "is_configured": false, 00:34:53.250 "data_offset": 2048, 00:34:53.250 "data_size": 63488 00:34:53.250 }, 00:34:53.250 { 00:34:53.250 "name": "BaseBdev2", 00:34:53.250 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:53.250 "is_configured": true, 00:34:53.250 "data_offset": 2048, 00:34:53.250 "data_size": 63488 00:34:53.250 }, 00:34:53.250 { 00:34:53.250 "name": "BaseBdev3", 00:34:53.250 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:53.250 "is_configured": true, 00:34:53.250 "data_offset": 2048, 00:34:53.250 "data_size": 63488 00:34:53.250 }, 00:34:53.250 { 00:34:53.250 "name": "BaseBdev4", 00:34:53.250 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:53.250 "is_configured": true, 00:34:53.250 "data_offset": 2048, 00:34:53.250 "data_size": 63488 
00:34:53.250 } 00:34:53.250 ] 00:34:53.250 }' 00:34:53.250 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:53.250 23:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:53.817 23:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:54.074 [2024-07-13 23:20:43.406001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:54.074 [2024-07-13 23:20:43.406124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.074 [2024-07-13 23:20:43.406168] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:54.074 [2024-07-13 23:20:43.406191] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.074 [2024-07-13 23:20:43.406723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.074 [2024-07-13 23:20:43.406813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:54.074 [2024-07-13 23:20:43.406939] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:54.074 [2024-07-13 23:20:43.406957] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:54.074 [2024-07-13 23:20:43.406965] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:54.074 [2024-07-13 23:20:43.407024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:54.074 [2024-07-13 23:20:43.411372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000046040 00:34:54.074 spare 00:34:54.074 [2024-07-13 23:20:43.421846] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:54.074 23:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:55.446 "name": "raid_bdev1", 00:34:55.446 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:55.446 "strip_size_kb": 64, 00:34:55.446 "state": "online", 00:34:55.446 "raid_level": "raid5f", 00:34:55.446 "superblock": true, 00:34:55.446 "num_base_bdevs": 4, 00:34:55.446 "num_base_bdevs_discovered": 4, 00:34:55.446 "num_base_bdevs_operational": 4, 00:34:55.446 "process": { 00:34:55.446 "type": "rebuild", 00:34:55.446 "target": "spare", 
00:34:55.446 "progress": { 00:34:55.446 "blocks": 23040, 00:34:55.446 "percent": 12 00:34:55.446 } 00:34:55.446 }, 00:34:55.446 "base_bdevs_list": [ 00:34:55.446 { 00:34:55.446 "name": "spare", 00:34:55.446 "uuid": "286546b0-b04f-5142-a2d6-f13250ef39c9", 00:34:55.446 "is_configured": true, 00:34:55.446 "data_offset": 2048, 00:34:55.446 "data_size": 63488 00:34:55.446 }, 00:34:55.446 { 00:34:55.446 "name": "BaseBdev2", 00:34:55.446 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:55.446 "is_configured": true, 00:34:55.446 "data_offset": 2048, 00:34:55.446 "data_size": 63488 00:34:55.446 }, 00:34:55.446 { 00:34:55.446 "name": "BaseBdev3", 00:34:55.446 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:55.446 "is_configured": true, 00:34:55.446 "data_offset": 2048, 00:34:55.446 "data_size": 63488 00:34:55.446 }, 00:34:55.446 { 00:34:55.446 "name": "BaseBdev4", 00:34:55.446 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:55.446 "is_configured": true, 00:34:55.446 "data_offset": 2048, 00:34:55.446 "data_size": 63488 00:34:55.446 } 00:34:55.446 ] 00:34:55.446 }' 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:55.446 23:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:55.704 [2024-07-13 23:20:44.991284] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:55.704 [2024-07-13 23:20:45.034260] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:55.704 [2024-07-13 23:20:45.034371] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:55.705 [2024-07-13 23:20:45.034391] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:55.705 [2024-07-13 23:20:45.034400] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.705 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.963 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:55.963 "name": "raid_bdev1", 00:34:55.963 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:55.963 "strip_size_kb": 64, 00:34:55.963 "state": "online", 00:34:55.963 "raid_level": "raid5f", 00:34:55.963 "superblock": true, 00:34:55.963 "num_base_bdevs": 4, 00:34:55.963 "num_base_bdevs_discovered": 3, 00:34:55.963 "num_base_bdevs_operational": 3, 00:34:55.963 "base_bdevs_list": [ 00:34:55.963 { 00:34:55.963 "name": null, 00:34:55.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.963 "is_configured": false, 00:34:55.963 "data_offset": 2048, 00:34:55.963 "data_size": 63488 00:34:55.963 }, 00:34:55.963 { 00:34:55.963 "name": "BaseBdev2", 00:34:55.963 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:55.963 "is_configured": true, 00:34:55.963 "data_offset": 2048, 00:34:55.963 "data_size": 63488 00:34:55.963 }, 00:34:55.963 { 00:34:55.963 "name": "BaseBdev3", 00:34:55.963 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:55.963 "is_configured": true, 00:34:55.963 "data_offset": 2048, 00:34:55.963 "data_size": 63488 00:34:55.963 }, 00:34:55.963 { 00:34:55.963 "name": "BaseBdev4", 00:34:55.963 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:55.963 "is_configured": true, 00:34:55.963 "data_offset": 2048, 00:34:55.963 "data_size": 63488 00:34:55.963 } 00:34:55.963 ] 00:34:55.963 }' 00:34:55.963 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:55.963 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.898 23:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.898 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:56.898 "name": "raid_bdev1", 00:34:56.898 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:56.898 "strip_size_kb": 64, 00:34:56.898 "state": "online", 00:34:56.898 "raid_level": "raid5f", 00:34:56.898 "superblock": true, 00:34:56.898 "num_base_bdevs": 4, 00:34:56.898 "num_base_bdevs_discovered": 3, 00:34:56.898 "num_base_bdevs_operational": 3, 00:34:56.898 "base_bdevs_list": [ 00:34:56.898 { 00:34:56.898 "name": null, 00:34:56.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.898 "is_configured": false, 00:34:56.898 "data_offset": 2048, 00:34:56.898 "data_size": 63488 00:34:56.898 }, 00:34:56.898 { 00:34:56.898 "name": "BaseBdev2", 00:34:56.898 "uuid": 
"9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:56.898 "is_configured": true, 00:34:56.898 "data_offset": 2048, 00:34:56.898 "data_size": 63488 00:34:56.898 }, 00:34:56.898 { 00:34:56.898 "name": "BaseBdev3", 00:34:56.898 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:56.898 "is_configured": true, 00:34:56.898 "data_offset": 2048, 00:34:56.898 "data_size": 63488 00:34:56.898 }, 00:34:56.898 { 00:34:56.898 "name": "BaseBdev4", 00:34:56.898 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:56.898 "is_configured": true, 00:34:56.898 "data_offset": 2048, 00:34:56.898 "data_size": 63488 00:34:56.898 } 00:34:56.898 ] 00:34:56.898 }' 00:34:56.898 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:56.898 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:56.898 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:56.898 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:56.898 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:57.156 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:57.414 [2024-07-13 23:20:46.752481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:57.414 [2024-07-13 23:20:46.752601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.414 [2024-07-13 23:20:46.752667] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:34:57.414 [2024-07-13 23:20:46.752694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.414 [2024-07-13 23:20:46.753308] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.414 [2024-07-13 23:20:46.753368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:57.414 [2024-07-13 23:20:46.753461] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:57.414 [2024-07-13 23:20:46.753480] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:57.414 [2024-07-13 23:20:46.753488] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:57.414 BaseBdev1 00:34:57.414 23:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:58.787 23:20:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.787 23:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.787 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:58.787 "name": "raid_bdev1", 00:34:58.787 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:58.787 "strip_size_kb": 64, 00:34:58.787 "state": "online", 00:34:58.787 "raid_level": "raid5f", 00:34:58.787 "superblock": true, 00:34:58.787 "num_base_bdevs": 4, 00:34:58.787 "num_base_bdevs_discovered": 3, 00:34:58.787 "num_base_bdevs_operational": 3, 00:34:58.787 "base_bdevs_list": [ 00:34:58.787 { 00:34:58.787 "name": null, 00:34:58.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.787 "is_configured": false, 00:34:58.787 "data_offset": 2048, 00:34:58.787 "data_size": 63488 00:34:58.787 }, 00:34:58.787 { 00:34:58.787 "name": "BaseBdev2", 00:34:58.787 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:58.787 "is_configured": true, 00:34:58.787 "data_offset": 2048, 00:34:58.787 "data_size": 63488 00:34:58.787 }, 00:34:58.787 { 00:34:58.787 "name": "BaseBdev3", 00:34:58.787 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:58.787 "is_configured": true, 00:34:58.787 "data_offset": 2048, 00:34:58.787 "data_size": 63488 00:34:58.787 }, 00:34:58.787 { 00:34:58.787 "name": "BaseBdev4", 00:34:58.787 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:58.787 "is_configured": true, 00:34:58.787 "data_offset": 2048, 00:34:58.787 "data_size": 63488 00:34:58.787 } 00:34:58.787 ] 00:34:58.787 }' 00:34:58.787 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:58.787 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:59.355 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:59.613 "name": "raid_bdev1", 00:34:59.613 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:34:59.613 "strip_size_kb": 64, 00:34:59.613 "state": "online", 00:34:59.613 "raid_level": "raid5f", 00:34:59.613 "superblock": true, 
00:34:59.613 "num_base_bdevs": 4, 00:34:59.613 "num_base_bdevs_discovered": 3, 00:34:59.613 "num_base_bdevs_operational": 3, 00:34:59.613 "base_bdevs_list": [ 00:34:59.613 { 00:34:59.613 "name": null, 00:34:59.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.613 "is_configured": false, 00:34:59.613 "data_offset": 2048, 00:34:59.613 "data_size": 63488 00:34:59.613 }, 00:34:59.613 { 00:34:59.613 "name": "BaseBdev2", 00:34:59.613 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:34:59.613 "is_configured": true, 00:34:59.613 "data_offset": 2048, 00:34:59.613 "data_size": 63488 00:34:59.613 }, 00:34:59.613 { 00:34:59.613 "name": "BaseBdev3", 00:34:59.613 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:34:59.613 "is_configured": true, 00:34:59.613 "data_offset": 2048, 00:34:59.613 "data_size": 63488 00:34:59.613 }, 00:34:59.613 { 00:34:59.613 "name": "BaseBdev4", 00:34:59.613 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:34:59.613 "is_configured": true, 00:34:59.613 "data_offset": 2048, 00:34:59.613 "data_size": 63488 00:34:59.613 } 00:34:59.613 ] 00:34:59.613 }' 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:59.613 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:59.613 23:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:59.613 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:59.613 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:59.613 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:59.872 [2024-07-13 23:20:49.260434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:59.872 [2024-07-13 23:20:49.260600] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:59.872 [2024-07-13 23:20:49.260616] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:59.872 request: 00:34:59.872 { 00:34:59.872 "base_bdev": "BaseBdev1", 00:34:59.872 "raid_bdev": "raid_bdev1", 00:34:59.872 "method": "bdev_raid_add_base_bdev", 00:34:59.872 "req_id": 1 00:34:59.872 } 00:34:59.872 Got JSON-RPC error response 00:34:59.872 response: 00:34:59.872 { 00:34:59.872 "code": -22, 00:34:59.872 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:59.872 } 00:35:00.130 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:35:00.130 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:00.130 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:00.130 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:00.130 23:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.064 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.323 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:01.323 "name": "raid_bdev1", 00:35:01.323 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:35:01.323 "strip_size_kb": 64, 00:35:01.323 "state": "online", 00:35:01.323 "raid_level": "raid5f", 00:35:01.323 "superblock": true, 00:35:01.323 "num_base_bdevs": 4, 00:35:01.323 "num_base_bdevs_discovered": 3, 00:35:01.323 "num_base_bdevs_operational": 3, 00:35:01.323 "base_bdevs_list": [ 00:35:01.323 { 00:35:01.323 "name": null, 00:35:01.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.323 "is_configured": false, 00:35:01.323 "data_offset": 2048, 00:35:01.323 "data_size": 63488 00:35:01.323 }, 00:35:01.323 { 00:35:01.323 "name": "BaseBdev2", 00:35:01.323 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:35:01.323 "is_configured": true, 00:35:01.323 "data_offset": 2048, 00:35:01.323 
"data_size": 63488 00:35:01.323 }, 00:35:01.323 { 00:35:01.323 "name": "BaseBdev3", 00:35:01.323 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:35:01.323 "is_configured": true, 00:35:01.323 "data_offset": 2048, 00:35:01.323 "data_size": 63488 00:35:01.323 }, 00:35:01.323 { 00:35:01.323 "name": "BaseBdev4", 00:35:01.323 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:35:01.323 "is_configured": true, 00:35:01.323 "data_offset": 2048, 00:35:01.323 "data_size": 63488 00:35:01.323 } 00:35:01.323 ] 00:35:01.323 }' 00:35:01.323 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:01.323 23:20:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.890 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.149 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:02.149 "name": "raid_bdev1", 00:35:02.149 "uuid": "12515dd3-0ee9-4f08-9bd1-076715707bbc", 00:35:02.149 "strip_size_kb": 64, 00:35:02.149 "state": "online", 00:35:02.149 "raid_level": "raid5f", 00:35:02.149 "superblock": true, 00:35:02.149 "num_base_bdevs": 4, 00:35:02.149 "num_base_bdevs_discovered": 3, 00:35:02.149 "num_base_bdevs_operational": 3, 00:35:02.149 "base_bdevs_list": [ 00:35:02.149 { 00:35:02.149 "name": null, 00:35:02.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.149 "is_configured": false, 00:35:02.149 "data_offset": 2048, 00:35:02.149 "data_size": 63488 00:35:02.149 }, 00:35:02.149 { 00:35:02.149 "name": "BaseBdev2", 00:35:02.149 "uuid": "9d234050-07ce-52f1-b876-0471b61a4cba", 00:35:02.149 "is_configured": true, 00:35:02.149 "data_offset": 2048, 00:35:02.149 "data_size": 63488 00:35:02.149 }, 00:35:02.149 { 00:35:02.149 "name": "BaseBdev3", 00:35:02.149 "uuid": "8e10374c-c02c-5738-b7d9-8f5099de800a", 00:35:02.149 "is_configured": true, 00:35:02.149 "data_offset": 2048, 00:35:02.149 "data_size": 63488 00:35:02.149 }, 00:35:02.149 { 00:35:02.149 "name": "BaseBdev4", 00:35:02.149 "uuid": "2906589c-0b1a-5e89-a154-148211164bae", 00:35:02.149 "is_configured": true, 00:35:02.149 "data_offset": 2048, 00:35:02.149 "data_size": 63488 00:35:02.149 } 00:35:02.149 ] 00:35:02.149 }' 00:35:02.149 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:02.149 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:02.149 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@782 -- # killprocess 167395 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 167395 ']' 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 167395 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 167395 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:02.409 killing process with pid 167395 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 167395' 00:35:02.409 Received shutdown signal, test time was about 60.000000 seconds 00:35:02.409 00:35:02.409 Latency(us) 00:35:02.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.409 =================================================================================================================== 00:35:02.409 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 167395 00:35:02.409 [2024-07-13 23:20:51.580387] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:02.409 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 167395 00:35:02.409 [2024-07-13 23:20:51.580547] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:02.409 [2024-07-13 23:20:51.580644] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:02.409 [2024-07-13 23:20:51.580662] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:35:02.409 [2024-07-13 23:20:51.625521] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:02.667 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:35:02.667 00:35:02.667 real 0m40.003s 00:35:02.667 user 1m2.727s 00:35:02.667 sys 0m4.317s 00:35:02.667 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:02.667 23:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:02.667 ************************************ 00:35:02.667 END TEST raid5f_rebuild_test_sb 00:35:02.667 ************************************ 00:35:02.667 23:20:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:02.667 23:20:51 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:35:02.667 23:20:51 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:35:02.667 23:20:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:35:02.667 23:20:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:02.667 23:20:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:02.667 ************************************ 00:35:02.667 START TEST raid_state_function_test_sb_4k 00:35:02.667 ************************************ 00:35:02.667 23:20:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:35:02.667 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=168401 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 168401' 00:35:02.668 Process raid pid: 168401 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 168401 /var/tmp/spdk-raid.sock 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 168401 ']' 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:02.668 23:20:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.668 23:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:02.668 [2024-07-13 23:20:51.994677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:02.668 [2024-07-13 23:20:51.994947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.926 [2024-07-13 23:20:52.148639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.926 [2024-07-13 23:20:52.209949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.926 [2024-07-13 23:20:52.260796] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:03.862 23:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:03.862 23:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:35:03.862 23:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:03.862 [2024-07-13 23:20:53.203372] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:03.862 [2024-07-13 23:20:53.203480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:03.862 [2024-07-13 23:20:53.203510] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:03.862 [2024-07-13 23:20:53.203528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- 
# local tmp 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.862 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.119 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:04.119 "name": "Existed_Raid", 00:35:04.119 "uuid": "06c9c92f-9dfc-4f70-bfe4-7669b5488575", 00:35:04.119 "strip_size_kb": 0, 00:35:04.119 "state": "configuring", 00:35:04.119 "raid_level": "raid1", 00:35:04.119 "superblock": true, 00:35:04.119 "num_base_bdevs": 2, 00:35:04.119 "num_base_bdevs_discovered": 0, 00:35:04.119 "num_base_bdevs_operational": 2, 00:35:04.119 "base_bdevs_list": [ 00:35:04.119 { 00:35:04.119 "name": "BaseBdev1", 00:35:04.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.119 "is_configured": false, 00:35:04.119 "data_offset": 0, 00:35:04.119 "data_size": 0 00:35:04.119 }, 00:35:04.119 { 00:35:04.119 "name": "BaseBdev2", 00:35:04.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.119 "is_configured": false, 00:35:04.119 "data_offset": 0, 00:35:04.119 "data_size": 0 00:35:04.119 } 00:35:04.119 ] 00:35:04.119 }' 00:35:04.119 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:04.119 23:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:05.052 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:05.052 [2024-07-13 23:20:54.303588] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:05.052 [2024-07-13 23:20:54.303653] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:35:05.052 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:05.310 [2024-07-13 23:20:54.515647] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:05.310 [2024-07-13 23:20:54.515757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:05.310 [2024-07-13 23:20:54.515788] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:05.310 [2024-07-13 23:20:54.515818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:05.310 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:35:05.569 [2024-07-13 23:20:54.746449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:05.569 BaseBdev1 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@899 -- # local i 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:05.569 23:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:05.828 [ 00:35:05.828 { 00:35:05.828 "name": "BaseBdev1", 00:35:05.828 "aliases": [ 00:35:05.828 "4516ebc2-987c-40b4-9fef-d566ac0c63bd" 00:35:05.828 ], 00:35:05.828 "product_name": "Malloc disk", 00:35:05.828 "block_size": 4096, 00:35:05.828 "num_blocks": 8192, 00:35:05.828 "uuid": "4516ebc2-987c-40b4-9fef-d566ac0c63bd", 00:35:05.828 "assigned_rate_limits": { 00:35:05.828 "rw_ios_per_sec": 0, 00:35:05.828 "rw_mbytes_per_sec": 0, 00:35:05.828 "r_mbytes_per_sec": 0, 00:35:05.828 "w_mbytes_per_sec": 0 00:35:05.828 }, 00:35:05.828 "claimed": true, 00:35:05.828 "claim_type": "exclusive_write", 00:35:05.828 "zoned": false, 00:35:05.828 "supported_io_types": { 00:35:05.828 "read": true, 00:35:05.828 "write": true, 00:35:05.828 "unmap": true, 00:35:05.828 "flush": true, 00:35:05.828 "reset": true, 00:35:05.828 "nvme_admin": false, 00:35:05.828 "nvme_io": false, 00:35:05.828 "nvme_io_md": false, 00:35:05.828 "write_zeroes": true, 00:35:05.828 "zcopy": true, 00:35:05.828 "get_zone_info": false, 00:35:05.828 "zone_management": false, 00:35:05.828 "zone_append": false, 00:35:05.828 "compare": false, 00:35:05.828 "compare_and_write": false, 00:35:05.828 "abort": true, 00:35:05.828 "seek_hole": false, 00:35:05.828 "seek_data": false, 00:35:05.828 "copy": true, 00:35:05.828 "nvme_iov_md": false 00:35:05.828 }, 00:35:05.828 "memory_domains": [ 00:35:05.828 { 00:35:05.828 "dma_device_id": "system", 00:35:05.828 "dma_device_type": 1 00:35:05.828 }, 00:35:05.828 { 00:35:05.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.828 "dma_device_type": 2 00:35:05.828 } 00:35:05.828 ], 00:35:05.828 "driver_specific": {} 00:35:05.828 } 00:35:05.828 ] 00:35:05.828 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:35:05.828 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:05.828 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:05.828 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:06.087 "name": "Existed_Raid", 00:35:06.087 "uuid": "82abf26a-088e-4da5-b1f0-37b5c8964edc", 00:35:06.087 "strip_size_kb": 0, 00:35:06.087 "state": "configuring", 00:35:06.087 "raid_level": "raid1", 00:35:06.087 "superblock": true, 00:35:06.087 "num_base_bdevs": 2, 00:35:06.087 "num_base_bdevs_discovered": 1, 00:35:06.087 "num_base_bdevs_operational": 2, 00:35:06.087 "base_bdevs_list": [ 00:35:06.087 { 00:35:06.087 "name": "BaseBdev1", 00:35:06.087 "uuid": "4516ebc2-987c-40b4-9fef-d566ac0c63bd", 00:35:06.087 "is_configured": true, 00:35:06.087 "data_offset": 256, 00:35:06.087 "data_size": 7936 00:35:06.087 }, 00:35:06.087 { 00:35:06.087 "name": "BaseBdev2", 00:35:06.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.087 "is_configured": false, 00:35:06.087 "data_offset": 0, 00:35:06.087 "data_size": 0 00:35:06.087 } 00:35:06.087 ] 00:35:06.087 }' 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:06.087 23:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:07.021 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:07.021 [2024-07-13 23:20:56.374826] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:07.021 [2024-07-13 23:20:56.374894] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:35:07.021 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:07.279 [2024-07-13 23:20:56.638994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:07.279 [2024-07-13 23:20:56.641363] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:07.279 [2024-07-13 23:20:56.641438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:07.279 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:35:07.279 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:07.279 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.280 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.538 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.538 "name": "Existed_Raid", 00:35:07.538 "uuid": "d027c16d-e154-44c2-ac74-ab98ea9eb934", 00:35:07.538 "strip_size_kb": 0, 00:35:07.538 "state": "configuring", 00:35:07.538 "raid_level": "raid1", 00:35:07.538 "superblock": true, 00:35:07.538 "num_base_bdevs": 2, 00:35:07.538 "num_base_bdevs_discovered": 1, 00:35:07.538 "num_base_bdevs_operational": 2, 00:35:07.538 "base_bdevs_list": [ 00:35:07.538 { 00:35:07.538 "name": "BaseBdev1", 00:35:07.538 "uuid": "4516ebc2-987c-40b4-9fef-d566ac0c63bd", 00:35:07.538 "is_configured": true, 00:35:07.538 "data_offset": 256, 00:35:07.538 "data_size": 7936 00:35:07.538 }, 00:35:07.538 { 00:35:07.538 "name": "BaseBdev2", 00:35:07.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.538 "is_configured": false, 00:35:07.538 "data_offset": 0, 00:35:07.538 "data_size": 0 00:35:07.538 } 00:35:07.538 ] 00:35:07.538 }' 00:35:07.538 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.538 23:20:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:35:08.472 [2024-07-13 23:20:57.804280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:08.472 [2024-07-13 23:20:57.804528] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:35:08.472 [2024-07-13 23:20:57.804544] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:08.472 [2024-07-13 23:20:57.804734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:35:08.472 [2024-07-13 23:20:57.805274] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:35:08.472 [2024-07-13 23:20:57.805302] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:35:08.472 BaseBdev2 00:35:08.472 [2024-07-13 23:20:57.805500] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:35:08.472 23:20:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:08.472 23:20:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:08.731 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:08.989 [ 00:35:08.989 { 00:35:08.989 "name": "BaseBdev2", 00:35:08.989 "aliases": [ 00:35:08.989 "b8066989-2e7b-46eb-b589-457e0886471e" 00:35:08.989 ], 00:35:08.989 "product_name": "Malloc disk", 00:35:08.989 "block_size": 4096, 00:35:08.989 "num_blocks": 8192, 00:35:08.989 "uuid": "b8066989-2e7b-46eb-b589-457e0886471e", 00:35:08.989 "assigned_rate_limits": { 00:35:08.989 "rw_ios_per_sec": 0, 00:35:08.989 "rw_mbytes_per_sec": 0, 00:35:08.989 "r_mbytes_per_sec": 0, 00:35:08.989 "w_mbytes_per_sec": 0 00:35:08.989 }, 00:35:08.989 "claimed": true, 00:35:08.989 "claim_type": "exclusive_write", 00:35:08.989 "zoned": false, 00:35:08.989 "supported_io_types": { 00:35:08.989 "read": true, 00:35:08.989 "write": true, 00:35:08.989 "unmap": true, 00:35:08.989 "flush": true, 00:35:08.989 "reset": true, 00:35:08.989 "nvme_admin": false, 00:35:08.989 "nvme_io": false, 00:35:08.989 "nvme_io_md": false, 00:35:08.989 "write_zeroes": true, 00:35:08.989 "zcopy": true, 00:35:08.989 "get_zone_info": false, 00:35:08.989 "zone_management": false, 00:35:08.989 "zone_append": false, 00:35:08.989 "compare": false, 00:35:08.989 "compare_and_write": false, 00:35:08.989 "abort": true, 00:35:08.989 "seek_hole": false, 00:35:08.989 "seek_data": false, 00:35:08.990 "copy": true, 00:35:08.990 "nvme_iov_md": false 00:35:08.990 }, 00:35:08.990 "memory_domains": [ 00:35:08.990 { 00:35:08.990 "dma_device_id": "system", 00:35:08.990 "dma_device_type": 1 00:35:08.990 }, 00:35:08.990 { 00:35:08.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:08.990 "dma_device_type": 2 00:35:08.990 } 00:35:08.990 ], 00:35:08.990 "driver_specific": {} 00:35:08.990 } 00:35:08.990 ] 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.990 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.248 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:09.248 "name": "Existed_Raid", 00:35:09.248 "uuid": "d027c16d-e154-44c2-ac74-ab98ea9eb934", 00:35:09.248 "strip_size_kb": 0, 00:35:09.248 "state": "online", 00:35:09.248 "raid_level": "raid1", 00:35:09.248 "superblock": true, 00:35:09.248 "num_base_bdevs": 2, 00:35:09.248 "num_base_bdevs_discovered": 2, 00:35:09.248 "num_base_bdevs_operational": 2, 00:35:09.248 "base_bdevs_list": [ 00:35:09.248 { 00:35:09.248 "name": "BaseBdev1", 00:35:09.248 "uuid": "4516ebc2-987c-40b4-9fef-d566ac0c63bd", 00:35:09.248 "is_configured": true, 00:35:09.248 "data_offset": 256, 00:35:09.248 "data_size": 7936 00:35:09.248 }, 00:35:09.248 { 00:35:09.248 "name": "BaseBdev2", 00:35:09.248 "uuid": "b8066989-2e7b-46eb-b589-457e0886471e", 00:35:09.248 "is_configured": true, 00:35:09.248 "data_offset": 256, 00:35:09.248 "data_size": 7936 00:35:09.248 } 00:35:09.248 ] 00:35:09.248 }' 00:35:09.248 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:09.248 23:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:09.815 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:10.074 [2024-07-13 23:20:59.373021] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:10.074 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:10.074 "name": "Existed_Raid", 00:35:10.074 "aliases": [ 00:35:10.074 "d027c16d-e154-44c2-ac74-ab98ea9eb934" 00:35:10.074 ], 00:35:10.074 "product_name": "Raid Volume", 00:35:10.074 "block_size": 4096, 00:35:10.074 "num_blocks": 7936, 00:35:10.074 "uuid": "d027c16d-e154-44c2-ac74-ab98ea9eb934", 
00:35:10.074 "assigned_rate_limits": { 00:35:10.074 "rw_ios_per_sec": 0, 00:35:10.074 "rw_mbytes_per_sec": 0, 00:35:10.074 "r_mbytes_per_sec": 0, 00:35:10.074 "w_mbytes_per_sec": 0 00:35:10.074 }, 00:35:10.074 "claimed": false, 00:35:10.074 "zoned": false, 00:35:10.074 "supported_io_types": { 00:35:10.074 "read": true, 00:35:10.074 "write": true, 00:35:10.074 "unmap": false, 00:35:10.074 "flush": false, 00:35:10.074 "reset": true, 00:35:10.074 "nvme_admin": false, 00:35:10.074 "nvme_io": false, 00:35:10.074 "nvme_io_md": false, 00:35:10.074 "write_zeroes": true, 00:35:10.074 "zcopy": false, 00:35:10.074 "get_zone_info": false, 00:35:10.074 "zone_management": false, 00:35:10.074 "zone_append": false, 00:35:10.074 "compare": false, 00:35:10.074 "compare_and_write": false, 00:35:10.074 "abort": false, 00:35:10.074 "seek_hole": false, 00:35:10.074 "seek_data": false, 00:35:10.074 "copy": false, 00:35:10.074 "nvme_iov_md": false 00:35:10.074 }, 00:35:10.074 "memory_domains": [ 00:35:10.074 { 00:35:10.074 "dma_device_id": "system", 00:35:10.074 "dma_device_type": 1 00:35:10.074 }, 00:35:10.074 { 00:35:10.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.074 "dma_device_type": 2 00:35:10.074 }, 00:35:10.074 { 00:35:10.074 "dma_device_id": "system", 00:35:10.074 "dma_device_type": 1 00:35:10.074 }, 00:35:10.074 { 00:35:10.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.074 "dma_device_type": 2 00:35:10.074 } 00:35:10.074 ], 00:35:10.074 "driver_specific": { 00:35:10.074 "raid": { 00:35:10.074 "uuid": "d027c16d-e154-44c2-ac74-ab98ea9eb934", 00:35:10.074 "strip_size_kb": 0, 00:35:10.074 "state": "online", 00:35:10.074 "raid_level": "raid1", 00:35:10.074 "superblock": true, 00:35:10.074 "num_base_bdevs": 2, 00:35:10.074 "num_base_bdevs_discovered": 2, 00:35:10.074 "num_base_bdevs_operational": 2, 00:35:10.074 "base_bdevs_list": [ 00:35:10.074 { 00:35:10.074 "name": "BaseBdev1", 00:35:10.074 "uuid": "4516ebc2-987c-40b4-9fef-d566ac0c63bd", 00:35:10.074 "is_configured": true, 00:35:10.074 "data_offset": 256, 00:35:10.074 "data_size": 7936 00:35:10.074 }, 00:35:10.074 { 00:35:10.074 "name": "BaseBdev2", 00:35:10.074 "uuid": "b8066989-2e7b-46eb-b589-457e0886471e", 00:35:10.074 "is_configured": true, 00:35:10.074 "data_offset": 256, 00:35:10.074 "data_size": 7936 00:35:10.074 } 00:35:10.074 ] 00:35:10.074 } 00:35:10.074 } 00:35:10.074 }' 00:35:10.074 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:10.074 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:35:10.074 BaseBdev2' 00:35:10.074 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:10.074 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:10.074 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:35:10.332 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:10.332 "name": "BaseBdev1", 00:35:10.332 "aliases": [ 00:35:10.332 "4516ebc2-987c-40b4-9fef-d566ac0c63bd" 00:35:10.332 ], 00:35:10.332 "product_name": "Malloc disk", 00:35:10.332 "block_size": 4096, 00:35:10.332 "num_blocks": 8192, 00:35:10.332 "uuid": "4516ebc2-987c-40b4-9fef-d566ac0c63bd", 00:35:10.332 
"assigned_rate_limits": { 00:35:10.332 "rw_ios_per_sec": 0, 00:35:10.332 "rw_mbytes_per_sec": 0, 00:35:10.332 "r_mbytes_per_sec": 0, 00:35:10.332 "w_mbytes_per_sec": 0 00:35:10.332 }, 00:35:10.332 "claimed": true, 00:35:10.332 "claim_type": "exclusive_write", 00:35:10.332 "zoned": false, 00:35:10.332 "supported_io_types": { 00:35:10.332 "read": true, 00:35:10.332 "write": true, 00:35:10.332 "unmap": true, 00:35:10.332 "flush": true, 00:35:10.332 "reset": true, 00:35:10.332 "nvme_admin": false, 00:35:10.332 "nvme_io": false, 00:35:10.332 "nvme_io_md": false, 00:35:10.332 "write_zeroes": true, 00:35:10.332 "zcopy": true, 00:35:10.332 "get_zone_info": false, 00:35:10.332 "zone_management": false, 00:35:10.332 "zone_append": false, 00:35:10.332 "compare": false, 00:35:10.332 "compare_and_write": false, 00:35:10.332 "abort": true, 00:35:10.332 "seek_hole": false, 00:35:10.332 "seek_data": false, 00:35:10.332 "copy": true, 00:35:10.332 "nvme_iov_md": false 00:35:10.332 }, 00:35:10.332 "memory_domains": [ 00:35:10.332 { 00:35:10.332 "dma_device_id": "system", 00:35:10.332 "dma_device_type": 1 00:35:10.332 }, 00:35:10.332 { 00:35:10.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.332 "dma_device_type": 2 00:35:10.332 } 00:35:10.332 ], 00:35:10.332 "driver_specific": {} 00:35:10.332 }' 00:35:10.333 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:10.591 23:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:10.850 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:10.850 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:10.850 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:10.850 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:10.850 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:11.109 "name": "BaseBdev2", 00:35:11.109 "aliases": [ 00:35:11.109 "b8066989-2e7b-46eb-b589-457e0886471e" 00:35:11.109 ], 00:35:11.109 "product_name": "Malloc disk", 00:35:11.109 "block_size": 4096, 00:35:11.109 "num_blocks": 8192, 00:35:11.109 "uuid": "b8066989-2e7b-46eb-b589-457e0886471e", 00:35:11.109 "assigned_rate_limits": { 00:35:11.109 "rw_ios_per_sec": 0, 00:35:11.109 "rw_mbytes_per_sec": 0, 
00:35:11.109 "r_mbytes_per_sec": 0, 00:35:11.109 "w_mbytes_per_sec": 0 00:35:11.109 }, 00:35:11.109 "claimed": true, 00:35:11.109 "claim_type": "exclusive_write", 00:35:11.109 "zoned": false, 00:35:11.109 "supported_io_types": { 00:35:11.109 "read": true, 00:35:11.109 "write": true, 00:35:11.109 "unmap": true, 00:35:11.109 "flush": true, 00:35:11.109 "reset": true, 00:35:11.109 "nvme_admin": false, 00:35:11.109 "nvme_io": false, 00:35:11.109 "nvme_io_md": false, 00:35:11.109 "write_zeroes": true, 00:35:11.109 "zcopy": true, 00:35:11.109 "get_zone_info": false, 00:35:11.109 "zone_management": false, 00:35:11.109 "zone_append": false, 00:35:11.109 "compare": false, 00:35:11.109 "compare_and_write": false, 00:35:11.109 "abort": true, 00:35:11.109 "seek_hole": false, 00:35:11.109 "seek_data": false, 00:35:11.109 "copy": true, 00:35:11.109 "nvme_iov_md": false 00:35:11.109 }, 00:35:11.109 "memory_domains": [ 00:35:11.109 { 00:35:11.109 "dma_device_id": "system", 00:35:11.109 "dma_device_type": 1 00:35:11.109 }, 00:35:11.109 { 00:35:11.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.109 "dma_device_type": 2 00:35:11.109 } 00:35:11.109 ], 00:35:11.109 "driver_specific": {} 00:35:11.109 }' 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:11.109 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:11.368 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:11.368 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:11.368 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:11.368 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:11.368 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:11.368 23:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:11.627 [2024-07-13 23:21:00.977205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:11.627 23:21:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:11.627 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.885 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:11.885 "name": "Existed_Raid", 00:35:11.885 "uuid": "d027c16d-e154-44c2-ac74-ab98ea9eb934", 00:35:11.885 "strip_size_kb": 0, 00:35:11.885 "state": "online", 00:35:11.885 "raid_level": "raid1", 00:35:11.885 "superblock": true, 00:35:11.885 "num_base_bdevs": 2, 00:35:11.885 "num_base_bdevs_discovered": 1, 00:35:11.885 "num_base_bdevs_operational": 1, 00:35:11.885 "base_bdevs_list": [ 00:35:11.885 { 00:35:11.885 "name": null, 00:35:11.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.885 "is_configured": false, 00:35:11.885 "data_offset": 256, 00:35:11.885 "data_size": 7936 00:35:11.885 }, 00:35:11.885 { 00:35:11.885 "name": "BaseBdev2", 00:35:11.885 "uuid": "b8066989-2e7b-46eb-b589-457e0886471e", 00:35:11.885 "is_configured": true, 00:35:11.885 "data_offset": 256, 00:35:11.885 "data_size": 7936 00:35:11.885 } 00:35:11.885 ] 00:35:11.885 }' 00:35:11.885 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:11.885 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:12.816 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:35:12.816 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:12.816 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.816 23:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:12.816 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:12.816 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:12.816 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:13.074 [2024-07-13 
23:21:02.399588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:13.074 [2024-07-13 23:21:02.399734] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:13.074 [2024-07-13 23:21:02.410059] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:13.074 [2024-07-13 23:21:02.410117] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:13.074 [2024-07-13 23:21:02.410129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:35:13.074 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:13.074 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:13.074 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:13.074 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 168401 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 168401 ']' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 168401 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168401 00:35:13.333 killing process with pid 168401 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168401' 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 168401 00:35:13.333 [2024-07-13 23:21:02.700332] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:13.333 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 168401 00:35:13.333 [2024-07-13 23:21:02.700406] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:13.592 ************************************ 00:35:13.592 END TEST raid_state_function_test_sb_4k 00:35:13.592 ************************************ 00:35:13.592 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:35:13.592 00:35:13.592 real 0m10.993s 00:35:13.592 user 0m20.247s 00:35:13.592 sys 0m1.387s 00:35:13.592 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 
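For reference, the RPC flow that raid_state_function_test_sb_4k drove above can be replayed by hand against a standalone SPDK target on the same socket. This is a sketch reconstructed only from the commands visible in the log (socket path, bdev names, and the 32 MiB / 4096-byte-block malloc sizing are taken from it); the comments are interpretation, and the actual assertion logic lives in bdev_raid.sh itself, not here.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Creating a RAID1 with superblock (-s) before its base bdevs exist leaves the
# raid bdev registered in "configuring" state with zero discovered bases.
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# As each malloc bdev appears it is claimed by the array; once both are
# discovered the state flips from "configuring" to "online".
$rpc bdev_malloc_create 32 4096 -b BaseBdev1
$rpc bdev_malloc_create 32 4096 -b BaseBdev2
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

# RAID1 has redundancy, so removing one base keeps the array online with
# num_base_bdevs_operational reduced to 1, as the last JSON dump above shows.
$rpc bdev_malloc_delete BaseBdev1
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'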
00:35:13.592 23:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:13.592 23:21:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:13.592 23:21:02 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:35:13.592 23:21:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:35:13.592 23:21:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.592 23:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:13.592 ************************************ 00:35:13.592 START TEST raid_superblock_test_4k 00:35:13.592 ************************************ 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=168766 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 168766 /var/tmp/spdk-raid.sock 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 168766 ']' 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:13.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
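The raid_superblock_test_4k run that starts here builds its array from passthru bdevs rather than raw malloc disks, so each base carries a stable, test-chosen UUID. A sketch of the setup it performs, reconstructed from the RPC calls in the log that follows (names and UUIDs copied from it; comments added here and not part of the test output):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Each malloc disk is wrapped in a passthru bdev pinned to a fixed UUID.
$rpc bdev_malloc_create 32 4096 -b malloc1
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_malloc_create 32 4096 -b malloc2
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# -s writes an on-disk superblock to every base; raid_bdev1 comes up online.
$rpc bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1

# The superblock outlives the array: after deleting raid_bdev1 and the passthru
# layer, creating a new raid directly from malloc1/malloc2 is rejected with
# -17 "File exists", because their superblocks still name the old array
# (the JSON-RPC error response appears further down in the log).
$rpc bdev_raid_delete raid_bdev1
$rpc bdev_passthru_delete pt1
$rpc bdev_passthru_delete pt2
$rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 || true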
00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:13.592 23:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:13.851 [2024-07-13 23:21:03.027221] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:13.852 [2024-07-13 23:21:03.027573] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168766 ] 00:35:13.852 [2024-07-13 23:21:03.166246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.852 [2024-07-13 23:21:03.232218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.110 [2024-07-13 23:21:03.286161] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:14.677 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:35:14.942 malloc1 00:35:14.942 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:15.214 [2024-07-13 23:21:04.542657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:15.214 [2024-07-13 23:21:04.542939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:15.214 [2024-07-13 23:21:04.543119] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:35:15.214 [2024-07-13 23:21:04.543302] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:15.214 [2024-07-13 23:21:04.546208] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:15.214 [2024-07-13 23:21:04.546427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:15.214 pt1 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:15.214 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:35:15.473 malloc2 00:35:15.473 23:21:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:15.732 [2024-07-13 23:21:05.053685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:15.732 [2024-07-13 23:21:05.053988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:15.732 [2024-07-13 23:21:05.054066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:35:15.732 [2024-07-13 23:21:05.054213] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:15.732 [2024-07-13 23:21:05.056487] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:15.732 [2024-07-13 23:21:05.056684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:15.732 pt2 00:35:15.732 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:15.732 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:15.732 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:15.990 [2024-07-13 23:21:05.277818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:15.990 [2024-07-13 23:21:05.280099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:15.990 [2024-07-13 23:21:05.280479] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:35:15.990 [2024-07-13 23:21:05.280609] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:15.990 [2024-07-13 23:21:05.280816] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:35:15.990 [2024-07-13 23:21:05.281476] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:35:15.990 [2024-07-13 23:21:05.281596] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:35:15.990 [2024-07-13 23:21:05.281917] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:15.990 
23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.990 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.249 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:16.249 "name": "raid_bdev1", 00:35:16.249 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:16.249 "strip_size_kb": 0, 00:35:16.249 "state": "online", 00:35:16.249 "raid_level": "raid1", 00:35:16.249 "superblock": true, 00:35:16.249 "num_base_bdevs": 2, 00:35:16.249 "num_base_bdevs_discovered": 2, 00:35:16.249 "num_base_bdevs_operational": 2, 00:35:16.249 "base_bdevs_list": [ 00:35:16.249 { 00:35:16.249 "name": "pt1", 00:35:16.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.249 "is_configured": true, 00:35:16.249 "data_offset": 256, 00:35:16.249 "data_size": 7936 00:35:16.249 }, 00:35:16.249 { 00:35:16.249 "name": "pt2", 00:35:16.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:16.249 "is_configured": true, 00:35:16.249 "data_offset": 256, 00:35:16.249 "data_size": 7936 00:35:16.249 } 00:35:16.249 ] 00:35:16.249 }' 00:35:16.249 23:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:16.249 23:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:16.815 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:17.072 [2024-07-13 23:21:06.382350] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:17.072 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:17.072 "name": "raid_bdev1", 00:35:17.072 
"aliases": [ 00:35:17.072 "4929046a-3b0c-40f6-b419-4be6cd63f6fe" 00:35:17.072 ], 00:35:17.072 "product_name": "Raid Volume", 00:35:17.072 "block_size": 4096, 00:35:17.072 "num_blocks": 7936, 00:35:17.072 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:17.072 "assigned_rate_limits": { 00:35:17.072 "rw_ios_per_sec": 0, 00:35:17.072 "rw_mbytes_per_sec": 0, 00:35:17.072 "r_mbytes_per_sec": 0, 00:35:17.072 "w_mbytes_per_sec": 0 00:35:17.072 }, 00:35:17.072 "claimed": false, 00:35:17.072 "zoned": false, 00:35:17.072 "supported_io_types": { 00:35:17.072 "read": true, 00:35:17.072 "write": true, 00:35:17.072 "unmap": false, 00:35:17.072 "flush": false, 00:35:17.072 "reset": true, 00:35:17.072 "nvme_admin": false, 00:35:17.072 "nvme_io": false, 00:35:17.072 "nvme_io_md": false, 00:35:17.072 "write_zeroes": true, 00:35:17.072 "zcopy": false, 00:35:17.072 "get_zone_info": false, 00:35:17.072 "zone_management": false, 00:35:17.072 "zone_append": false, 00:35:17.072 "compare": false, 00:35:17.072 "compare_and_write": false, 00:35:17.072 "abort": false, 00:35:17.072 "seek_hole": false, 00:35:17.072 "seek_data": false, 00:35:17.072 "copy": false, 00:35:17.072 "nvme_iov_md": false 00:35:17.072 }, 00:35:17.072 "memory_domains": [ 00:35:17.072 { 00:35:17.072 "dma_device_id": "system", 00:35:17.072 "dma_device_type": 1 00:35:17.072 }, 00:35:17.072 { 00:35:17.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.072 "dma_device_type": 2 00:35:17.072 }, 00:35:17.072 { 00:35:17.072 "dma_device_id": "system", 00:35:17.072 "dma_device_type": 1 00:35:17.072 }, 00:35:17.072 { 00:35:17.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.072 "dma_device_type": 2 00:35:17.072 } 00:35:17.072 ], 00:35:17.072 "driver_specific": { 00:35:17.072 "raid": { 00:35:17.072 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:17.072 "strip_size_kb": 0, 00:35:17.072 "state": "online", 00:35:17.072 "raid_level": "raid1", 00:35:17.072 "superblock": true, 00:35:17.072 "num_base_bdevs": 2, 00:35:17.072 "num_base_bdevs_discovered": 2, 00:35:17.072 "num_base_bdevs_operational": 2, 00:35:17.072 "base_bdevs_list": [ 00:35:17.072 { 00:35:17.072 "name": "pt1", 00:35:17.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:17.072 "is_configured": true, 00:35:17.072 "data_offset": 256, 00:35:17.072 "data_size": 7936 00:35:17.072 }, 00:35:17.072 { 00:35:17.072 "name": "pt2", 00:35:17.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.072 "is_configured": true, 00:35:17.072 "data_offset": 256, 00:35:17.072 "data_size": 7936 00:35:17.072 } 00:35:17.072 ] 00:35:17.072 } 00:35:17.072 } 00:35:17.072 }' 00:35:17.072 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:17.072 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:17.072 pt2' 00:35:17.072 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:17.072 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:17.072 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:17.330 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:17.330 "name": "pt1", 00:35:17.330 "aliases": [ 00:35:17.330 "00000000-0000-0000-0000-000000000001" 00:35:17.330 ], 00:35:17.330 
"product_name": "passthru", 00:35:17.330 "block_size": 4096, 00:35:17.330 "num_blocks": 8192, 00:35:17.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:17.330 "assigned_rate_limits": { 00:35:17.330 "rw_ios_per_sec": 0, 00:35:17.330 "rw_mbytes_per_sec": 0, 00:35:17.330 "r_mbytes_per_sec": 0, 00:35:17.330 "w_mbytes_per_sec": 0 00:35:17.330 }, 00:35:17.330 "claimed": true, 00:35:17.330 "claim_type": "exclusive_write", 00:35:17.330 "zoned": false, 00:35:17.330 "supported_io_types": { 00:35:17.330 "read": true, 00:35:17.330 "write": true, 00:35:17.330 "unmap": true, 00:35:17.330 "flush": true, 00:35:17.330 "reset": true, 00:35:17.330 "nvme_admin": false, 00:35:17.330 "nvme_io": false, 00:35:17.330 "nvme_io_md": false, 00:35:17.330 "write_zeroes": true, 00:35:17.330 "zcopy": true, 00:35:17.330 "get_zone_info": false, 00:35:17.330 "zone_management": false, 00:35:17.330 "zone_append": false, 00:35:17.330 "compare": false, 00:35:17.330 "compare_and_write": false, 00:35:17.330 "abort": true, 00:35:17.330 "seek_hole": false, 00:35:17.330 "seek_data": false, 00:35:17.330 "copy": true, 00:35:17.330 "nvme_iov_md": false 00:35:17.330 }, 00:35:17.330 "memory_domains": [ 00:35:17.330 { 00:35:17.330 "dma_device_id": "system", 00:35:17.330 "dma_device_type": 1 00:35:17.330 }, 00:35:17.331 { 00:35:17.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.331 "dma_device_type": 2 00:35:17.331 } 00:35:17.331 ], 00:35:17.331 "driver_specific": { 00:35:17.331 "passthru": { 00:35:17.331 "name": "pt1", 00:35:17.331 "base_bdev_name": "malloc1" 00:35:17.331 } 00:35:17.331 } 00:35:17.331 }' 00:35:17.331 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:17.589 23:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:17.846 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:17.846 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:17.846 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:17.846 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:17.846 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:18.103 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:18.103 "name": "pt2", 00:35:18.103 "aliases": [ 00:35:18.103 "00000000-0000-0000-0000-000000000002" 00:35:18.103 ], 00:35:18.103 "product_name": "passthru", 00:35:18.103 "block_size": 4096, 00:35:18.103 "num_blocks": 8192, 
00:35:18.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:18.103 "assigned_rate_limits": { 00:35:18.103 "rw_ios_per_sec": 0, 00:35:18.103 "rw_mbytes_per_sec": 0, 00:35:18.103 "r_mbytes_per_sec": 0, 00:35:18.103 "w_mbytes_per_sec": 0 00:35:18.103 }, 00:35:18.103 "claimed": true, 00:35:18.103 "claim_type": "exclusive_write", 00:35:18.103 "zoned": false, 00:35:18.103 "supported_io_types": { 00:35:18.103 "read": true, 00:35:18.103 "write": true, 00:35:18.103 "unmap": true, 00:35:18.103 "flush": true, 00:35:18.103 "reset": true, 00:35:18.103 "nvme_admin": false, 00:35:18.103 "nvme_io": false, 00:35:18.103 "nvme_io_md": false, 00:35:18.103 "write_zeroes": true, 00:35:18.103 "zcopy": true, 00:35:18.103 "get_zone_info": false, 00:35:18.103 "zone_management": false, 00:35:18.103 "zone_append": false, 00:35:18.103 "compare": false, 00:35:18.103 "compare_and_write": false, 00:35:18.103 "abort": true, 00:35:18.103 "seek_hole": false, 00:35:18.103 "seek_data": false, 00:35:18.103 "copy": true, 00:35:18.103 "nvme_iov_md": false 00:35:18.103 }, 00:35:18.103 "memory_domains": [ 00:35:18.103 { 00:35:18.103 "dma_device_id": "system", 00:35:18.103 "dma_device_type": 1 00:35:18.103 }, 00:35:18.103 { 00:35:18.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:18.103 "dma_device_type": 2 00:35:18.103 } 00:35:18.103 ], 00:35:18.103 "driver_specific": { 00:35:18.103 "passthru": { 00:35:18.103 "name": "pt2", 00:35:18.103 "base_bdev_name": "malloc2" 00:35:18.103 } 00:35:18.103 } 00:35:18.103 }' 00:35:18.103 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:18.103 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:18.103 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:18.103 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:18.361 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:18.619 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:18.619 23:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:35:18.877 [2024-07-13 23:21:08.030654] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:18.877 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4929046a-3b0c-40f6-b419-4be6cd63f6fe 00:35:18.877 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 4929046a-3b0c-40f6-b419-4be6cd63f6fe ']' 00:35:18.877 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:19.134 [2024-07-13 23:21:08.290443] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:19.134 [2024-07-13 23:21:08.290640] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:19.134 [2024-07-13 23:21:08.290854] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:19.134 [2024-07-13 23:21:08.291046] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:19.134 [2024-07-13 23:21:08.291199] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:35:19.134 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.134 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:35:19.392 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:35:19.392 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:35:19.392 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.392 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:19.392 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:19.392 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:19.648 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:19.648 23:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:19.906 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:20.164 [2024-07-13 23:21:09.498692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:20.164 [2024-07-13 23:21:09.501001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:20.164 [2024-07-13 23:21:09.501236] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:20.164 [2024-07-13 23:21:09.501460] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:20.164 [2024-07-13 23:21:09.501545] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:20.164 [2024-07-13 23:21:09.501643] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:35:20.164 request: 00:35:20.164 { 00:35:20.164 "name": "raid_bdev1", 00:35:20.164 "raid_level": "raid1", 00:35:20.164 "base_bdevs": [ 00:35:20.164 "malloc1", 00:35:20.164 "malloc2" 00:35:20.164 ], 00:35:20.164 "superblock": false, 00:35:20.164 "method": "bdev_raid_create", 00:35:20.164 "req_id": 1 00:35:20.164 } 00:35:20.164 Got JSON-RPC error response 00:35:20.164 response: 00:35:20.164 { 00:35:20.164 "code": -17, 00:35:20.164 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:20.164 } 00:35:20.164 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:35:20.164 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:20.164 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:20.164 23:21:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:20.164 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.164 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:35:20.422 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:35:20.422 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:35:20.422 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:20.679 [2024-07-13 23:21:09.942709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:20.679 [2024-07-13 23:21:09.942991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.679 [2024-07-13 23:21:09.943076] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:20.679 [2024-07-13 23:21:09.943260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:35:20.679 [2024-07-13 23:21:09.945816] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.679 [2024-07-13 23:21:09.946000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:20.679 [2024-07-13 23:21:09.946191] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:20.679 [2024-07-13 23:21:09.946386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:20.679 pt1 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.679 23:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.937 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:20.937 "name": "raid_bdev1", 00:35:20.937 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:20.937 "strip_size_kb": 0, 00:35:20.937 "state": "configuring", 00:35:20.937 "raid_level": "raid1", 00:35:20.937 "superblock": true, 00:35:20.937 "num_base_bdevs": 2, 00:35:20.937 "num_base_bdevs_discovered": 1, 00:35:20.937 "num_base_bdevs_operational": 2, 00:35:20.937 "base_bdevs_list": [ 00:35:20.937 { 00:35:20.937 "name": "pt1", 00:35:20.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:20.937 "is_configured": true, 00:35:20.937 "data_offset": 256, 00:35:20.937 "data_size": 7936 00:35:20.937 }, 00:35:20.937 { 00:35:20.937 "name": null, 00:35:20.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:20.937 "is_configured": false, 00:35:20.937 "data_offset": 256, 00:35:20.937 "data_size": 7936 00:35:20.937 } 00:35:20.937 ] 00:35:20.937 }' 00:35:20.937 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:20.937 23:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:21.502 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:35:21.502 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:35:21.502 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:21.502 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:21.760 [2024-07-13 23:21:10.968838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:21.760 [2024-07-13 23:21:10.969176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:21.760 [2024-07-13 23:21:10.969259] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:35:21.760 [2024-07-13 23:21:10.969560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:21.760 [2024-07-13 23:21:10.970112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:21.760 [2024-07-13 23:21:10.970289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:21.760 [2024-07-13 23:21:10.970427] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:21.761 [2024-07-13 23:21:10.970591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:21.761 [2024-07-13 23:21:10.970784] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:35:21.761 [2024-07-13 23:21:10.970933] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:21.761 [2024-07-13 23:21:10.971141] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:35:21.761 [2024-07-13 23:21:10.971603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:35:21.761 [2024-07-13 23:21:10.971650] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:35:21.761 [2024-07-13 23:21:10.971878] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:21.761 pt2 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:21.761 23:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:35:22.019 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:22.019 "name": "raid_bdev1", 00:35:22.019 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:22.019 "strip_size_kb": 0, 00:35:22.019 "state": "online", 00:35:22.019 "raid_level": "raid1", 00:35:22.019 "superblock": true, 00:35:22.019 "num_base_bdevs": 2, 00:35:22.019 "num_base_bdevs_discovered": 2, 00:35:22.019 "num_base_bdevs_operational": 2, 00:35:22.019 "base_bdevs_list": [ 00:35:22.019 { 00:35:22.019 "name": "pt1", 00:35:22.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:22.019 "is_configured": true, 00:35:22.019 "data_offset": 256, 00:35:22.019 "data_size": 7936 00:35:22.019 }, 00:35:22.019 { 00:35:22.019 "name": "pt2", 00:35:22.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:22.019 "is_configured": true, 00:35:22.019 "data_offset": 256, 00:35:22.019 "data_size": 7936 00:35:22.019 } 00:35:22.019 ] 00:35:22.019 }' 00:35:22.019 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:22.019 23:21:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:22.585 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:22.586 23:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:22.844 [2024-07-13 23:21:12.125445] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:22.844 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:22.844 "name": "raid_bdev1", 00:35:22.844 "aliases": [ 00:35:22.844 "4929046a-3b0c-40f6-b419-4be6cd63f6fe" 00:35:22.844 ], 00:35:22.844 "product_name": "Raid Volume", 00:35:22.844 "block_size": 4096, 00:35:22.844 "num_blocks": 7936, 00:35:22.844 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:22.844 "assigned_rate_limits": { 00:35:22.844 "rw_ios_per_sec": 0, 00:35:22.844 "rw_mbytes_per_sec": 0, 00:35:22.844 "r_mbytes_per_sec": 0, 00:35:22.844 "w_mbytes_per_sec": 0 00:35:22.844 }, 00:35:22.844 "claimed": false, 00:35:22.844 "zoned": false, 00:35:22.844 "supported_io_types": { 00:35:22.844 "read": true, 00:35:22.844 "write": true, 00:35:22.844 "unmap": false, 00:35:22.844 "flush": false, 00:35:22.844 "reset": true, 00:35:22.844 "nvme_admin": false, 00:35:22.844 "nvme_io": false, 00:35:22.844 "nvme_io_md": false, 00:35:22.844 "write_zeroes": true, 00:35:22.844 "zcopy": false, 00:35:22.844 "get_zone_info": false, 00:35:22.844 "zone_management": false, 00:35:22.844 "zone_append": false, 00:35:22.844 "compare": false, 00:35:22.844 "compare_and_write": false, 00:35:22.844 "abort": false, 00:35:22.844 "seek_hole": false, 00:35:22.844 "seek_data": false, 00:35:22.844 "copy": 
false, 00:35:22.844 "nvme_iov_md": false 00:35:22.844 }, 00:35:22.844 "memory_domains": [ 00:35:22.844 { 00:35:22.844 "dma_device_id": "system", 00:35:22.844 "dma_device_type": 1 00:35:22.844 }, 00:35:22.844 { 00:35:22.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:22.844 "dma_device_type": 2 00:35:22.844 }, 00:35:22.844 { 00:35:22.844 "dma_device_id": "system", 00:35:22.844 "dma_device_type": 1 00:35:22.844 }, 00:35:22.844 { 00:35:22.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:22.844 "dma_device_type": 2 00:35:22.844 } 00:35:22.844 ], 00:35:22.844 "driver_specific": { 00:35:22.844 "raid": { 00:35:22.844 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:22.844 "strip_size_kb": 0, 00:35:22.844 "state": "online", 00:35:22.844 "raid_level": "raid1", 00:35:22.844 "superblock": true, 00:35:22.844 "num_base_bdevs": 2, 00:35:22.844 "num_base_bdevs_discovered": 2, 00:35:22.844 "num_base_bdevs_operational": 2, 00:35:22.844 "base_bdevs_list": [ 00:35:22.844 { 00:35:22.844 "name": "pt1", 00:35:22.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:22.844 "is_configured": true, 00:35:22.844 "data_offset": 256, 00:35:22.844 "data_size": 7936 00:35:22.844 }, 00:35:22.844 { 00:35:22.844 "name": "pt2", 00:35:22.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:22.844 "is_configured": true, 00:35:22.844 "data_offset": 256, 00:35:22.844 "data_size": 7936 00:35:22.844 } 00:35:22.844 ] 00:35:22.844 } 00:35:22.844 } 00:35:22.844 }' 00:35:22.844 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:22.844 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:22.844 pt2' 00:35:22.844 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:22.844 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:22.844 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:23.102 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:23.102 "name": "pt1", 00:35:23.102 "aliases": [ 00:35:23.102 "00000000-0000-0000-0000-000000000001" 00:35:23.102 ], 00:35:23.102 "product_name": "passthru", 00:35:23.102 "block_size": 4096, 00:35:23.102 "num_blocks": 8192, 00:35:23.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:23.102 "assigned_rate_limits": { 00:35:23.102 "rw_ios_per_sec": 0, 00:35:23.102 "rw_mbytes_per_sec": 0, 00:35:23.102 "r_mbytes_per_sec": 0, 00:35:23.102 "w_mbytes_per_sec": 0 00:35:23.102 }, 00:35:23.102 "claimed": true, 00:35:23.102 "claim_type": "exclusive_write", 00:35:23.102 "zoned": false, 00:35:23.102 "supported_io_types": { 00:35:23.102 "read": true, 00:35:23.102 "write": true, 00:35:23.102 "unmap": true, 00:35:23.102 "flush": true, 00:35:23.102 "reset": true, 00:35:23.102 "nvme_admin": false, 00:35:23.102 "nvme_io": false, 00:35:23.102 "nvme_io_md": false, 00:35:23.102 "write_zeroes": true, 00:35:23.102 "zcopy": true, 00:35:23.102 "get_zone_info": false, 00:35:23.102 "zone_management": false, 00:35:23.102 "zone_append": false, 00:35:23.102 "compare": false, 00:35:23.102 "compare_and_write": false, 00:35:23.102 "abort": true, 00:35:23.103 "seek_hole": false, 00:35:23.103 "seek_data": false, 00:35:23.103 "copy": true, 00:35:23.103 "nvme_iov_md": false 00:35:23.103 }, 00:35:23.103 
"memory_domains": [ 00:35:23.103 { 00:35:23.103 "dma_device_id": "system", 00:35:23.103 "dma_device_type": 1 00:35:23.103 }, 00:35:23.103 { 00:35:23.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:23.103 "dma_device_type": 2 00:35:23.103 } 00:35:23.103 ], 00:35:23.103 "driver_specific": { 00:35:23.103 "passthru": { 00:35:23.103 "name": "pt1", 00:35:23.103 "base_bdev_name": "malloc1" 00:35:23.103 } 00:35:23.103 } 00:35:23.103 }' 00:35:23.103 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.103 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:23.360 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:23.619 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:23.619 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:23.619 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:23.619 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:23.619 23:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:23.878 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:23.878 "name": "pt2", 00:35:23.878 "aliases": [ 00:35:23.878 "00000000-0000-0000-0000-000000000002" 00:35:23.878 ], 00:35:23.878 "product_name": "passthru", 00:35:23.878 "block_size": 4096, 00:35:23.878 "num_blocks": 8192, 00:35:23.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:23.878 "assigned_rate_limits": { 00:35:23.878 "rw_ios_per_sec": 0, 00:35:23.878 "rw_mbytes_per_sec": 0, 00:35:23.878 "r_mbytes_per_sec": 0, 00:35:23.878 "w_mbytes_per_sec": 0 00:35:23.878 }, 00:35:23.878 "claimed": true, 00:35:23.878 "claim_type": "exclusive_write", 00:35:23.878 "zoned": false, 00:35:23.878 "supported_io_types": { 00:35:23.878 "read": true, 00:35:23.878 "write": true, 00:35:23.878 "unmap": true, 00:35:23.878 "flush": true, 00:35:23.878 "reset": true, 00:35:23.878 "nvme_admin": false, 00:35:23.878 "nvme_io": false, 00:35:23.878 "nvme_io_md": false, 00:35:23.878 "write_zeroes": true, 00:35:23.878 "zcopy": true, 00:35:23.878 "get_zone_info": false, 00:35:23.878 "zone_management": false, 00:35:23.878 "zone_append": false, 00:35:23.878 "compare": false, 00:35:23.878 "compare_and_write": false, 00:35:23.878 "abort": true, 00:35:23.878 "seek_hole": false, 00:35:23.878 "seek_data": false, 00:35:23.878 "copy": true, 00:35:23.878 "nvme_iov_md": false 00:35:23.878 }, 00:35:23.878 "memory_domains": [ 00:35:23.878 { 00:35:23.878 "dma_device_id": "system", 00:35:23.878 
"dma_device_type": 1 00:35:23.878 }, 00:35:23.878 { 00:35:23.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:23.879 "dma_device_type": 2 00:35:23.879 } 00:35:23.879 ], 00:35:23.879 "driver_specific": { 00:35:23.879 "passthru": { 00:35:23.879 "name": "pt2", 00:35:23.879 "base_bdev_name": "malloc2" 00:35:23.879 } 00:35:23.879 } 00:35:23.879 }' 00:35:23.879 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.879 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:23.879 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:23.879 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:23.879 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:24.137 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:24.394 [2024-07-13 23:21:13.757857] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:24.394 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 4929046a-3b0c-40f6-b419-4be6cd63f6fe '!=' 4929046a-3b0c-40f6-b419-4be6cd63f6fe ']' 00:35:24.394 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:35:24.394 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:24.394 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:35:24.394 23:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:24.651 [2024-07-13 23:21:14.049725] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.909 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.167 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:25.167 "name": "raid_bdev1", 00:35:25.167 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:25.167 "strip_size_kb": 0, 00:35:25.167 "state": "online", 00:35:25.167 "raid_level": "raid1", 00:35:25.167 "superblock": true, 00:35:25.167 "num_base_bdevs": 2, 00:35:25.167 "num_base_bdevs_discovered": 1, 00:35:25.167 "num_base_bdevs_operational": 1, 00:35:25.167 "base_bdevs_list": [ 00:35:25.167 { 00:35:25.167 "name": null, 00:35:25.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.167 "is_configured": false, 00:35:25.167 "data_offset": 256, 00:35:25.167 "data_size": 7936 00:35:25.167 }, 00:35:25.167 { 00:35:25.167 "name": "pt2", 00:35:25.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:25.167 "is_configured": true, 00:35:25.167 "data_offset": 256, 00:35:25.167 "data_size": 7936 00:35:25.167 } 00:35:25.167 ] 00:35:25.167 }' 00:35:25.167 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:25.167 23:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:25.733 23:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:25.991 [2024-07-13 23:21:15.225937] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:25.991 [2024-07-13 23:21:15.226136] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:25.991 [2024-07-13 23:21:15.226332] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:25.991 [2024-07-13 23:21:15.226501] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:25.991 [2024-07-13 23:21:15.226614] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:35:25.991 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.991 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:26.249 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:26.249 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:26.249 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:26.249 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:26.249 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:26.520 23:21:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:26.520 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:26.520 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:26.520 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:26.520 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:35:26.520 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:26.790 [2024-07-13 23:21:15.929305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:26.790 [2024-07-13 23:21:15.929564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:26.790 [2024-07-13 23:21:15.929654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:26.790 [2024-07-13 23:21:15.929982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:26.790 [2024-07-13 23:21:15.932642] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:26.790 [2024-07-13 23:21:15.932873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:26.790 [2024-07-13 23:21:15.933103] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:26.791 [2024-07-13 23:21:15.933265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:26.791 [2024-07-13 23:21:15.933446] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:35:26.791 [2024-07-13 23:21:15.933548] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:26.791 [2024-07-13 23:21:15.933692] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:35:26.791 [2024-07-13 23:21:15.934157] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:35:26.791 [2024-07-13 23:21:15.934277] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:35:26.791 [2024-07-13 23:21:15.934551] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:26.791 pt2 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:26.791 23:21:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.791 23:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.049 23:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:27.049 "name": "raid_bdev1", 00:35:27.049 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:27.049 "strip_size_kb": 0, 00:35:27.049 "state": "online", 00:35:27.049 "raid_level": "raid1", 00:35:27.049 "superblock": true, 00:35:27.049 "num_base_bdevs": 2, 00:35:27.049 "num_base_bdevs_discovered": 1, 00:35:27.049 "num_base_bdevs_operational": 1, 00:35:27.049 "base_bdevs_list": [ 00:35:27.049 { 00:35:27.049 "name": null, 00:35:27.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:27.049 "is_configured": false, 00:35:27.049 "data_offset": 256, 00:35:27.049 "data_size": 7936 00:35:27.049 }, 00:35:27.049 { 00:35:27.049 "name": "pt2", 00:35:27.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:27.049 "is_configured": true, 00:35:27.049 "data_offset": 256, 00:35:27.049 "data_size": 7936 00:35:27.049 } 00:35:27.049 ] 00:35:27.049 }' 00:35:27.049 23:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:27.049 23:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:27.615 23:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:27.873 [2024-07-13 23:21:17.152479] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:27.873 [2024-07-13 23:21:17.152707] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:27.873 [2024-07-13 23:21:17.152923] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:27.873 [2024-07-13 23:21:17.153154] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:27.873 [2024-07-13 23:21:17.153307] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:35:27.873 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.873 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:28.131 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:28.131 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:28.131 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:35:28.131 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:28.389 [2024-07-13 23:21:17.584525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:28.389 [2024-07-13 23:21:17.584769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.389 [2024-07-13 23:21:17.584873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008d80 00:35:28.389 [2024-07-13 23:21:17.585031] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.389 [2024-07-13 23:21:17.587449] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.389 [2024-07-13 23:21:17.587631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:28.389 [2024-07-13 23:21:17.587822] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:28.389 [2024-07-13 23:21:17.587947] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:28.389 [2024-07-13 23:21:17.588160] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:28.389 [2024-07-13 23:21:17.588356] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:28.389 [2024-07-13 23:21:17.588497] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:35:28.389 [2024-07-13 23:21:17.588641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:28.390 [2024-07-13 23:21:17.588821] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:35:28.390 [2024-07-13 23:21:17.588962] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:28.390 [2024-07-13 23:21:17.589074] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:35:28.390 [2024-07-13 23:21:17.589532] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:35:28.390 [2024-07-13 23:21:17.589584] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:35:28.390 [2024-07-13 23:21:17.589773] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:28.390 pt1 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.390 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
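(Aside: the examine path exercised just above is decided purely by on-disk superblock sequence numbers. A sketch of the same flow by hand, using only RPCs and jq filters that appear in this log; the expected output is taken from the trace, not guaranteed by the API.)
# Re-registering pt1 triggers examine; pt2 carries the newer superblock (seq 4 > 2),
# so the stale configuring raid bdev is dropped and raid_bdev1 is reassembled from pt2 alone,
# leaving base bdev slot 0 unconfigured.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online \
    | jq -r '.[].base_bdevs_list[0].is_configured'
# expect: false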
00:35:28.648 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:28.648 "name": "raid_bdev1", 00:35:28.648 "uuid": "4929046a-3b0c-40f6-b419-4be6cd63f6fe", 00:35:28.648 "strip_size_kb": 0, 00:35:28.648 "state": "online", 00:35:28.648 "raid_level": "raid1", 00:35:28.648 "superblock": true, 00:35:28.648 "num_base_bdevs": 2, 00:35:28.648 "num_base_bdevs_discovered": 1, 00:35:28.648 "num_base_bdevs_operational": 1, 00:35:28.648 "base_bdevs_list": [ 00:35:28.648 { 00:35:28.648 "name": null, 00:35:28.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.648 "is_configured": false, 00:35:28.648 "data_offset": 256, 00:35:28.648 "data_size": 7936 00:35:28.648 }, 00:35:28.648 { 00:35:28.648 "name": "pt2", 00:35:28.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:28.648 "is_configured": true, 00:35:28.648 "data_offset": 256, 00:35:28.648 "data_size": 7936 00:35:28.648 } 00:35:28.648 ] 00:35:28.648 }' 00:35:28.648 23:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:28.648 23:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:29.215 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:29.215 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:29.473 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:29.473 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:29.473 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:29.732 [2024-07-13 23:21:18.937514] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 4929046a-3b0c-40f6-b419-4be6cd63f6fe '!=' 4929046a-3b0c-40f6-b419-4be6cd63f6fe ']' 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 168766 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 168766 ']' 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 168766 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168766 00:35:29.732 killing process with pid 168766 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168766' 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 168766 00:35:29.732 [2024-07-13 23:21:18.977407] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:29.732 23:21:18 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@972 -- # wait 168766 00:35:29.732 [2024-07-13 23:21:18.977487] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:29.732 [2024-07-13 23:21:18.977544] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:29.732 [2024-07-13 23:21:18.977566] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:35:29.732 [2024-07-13 23:21:18.997634] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:29.991 ************************************ 00:35:29.991 END TEST raid_superblock_test_4k 00:35:29.991 ************************************ 00:35:29.991 23:21:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:35:29.991 00:35:29.991 real 0m16.241s 00:35:29.991 user 0m30.478s 00:35:29.991 sys 0m2.077s 00:35:29.991 23:21:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:29.991 23:21:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:29.991 23:21:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:29.991 23:21:19 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:35:29.991 23:21:19 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:35:29.991 23:21:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:29.991 23:21:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:29.991 23:21:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:29.991 ************************************ 00:35:29.991 START TEST raid_rebuild_test_sb_4k 00:35:29.991 ************************************ 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:29.991 
23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=169289 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 169289 /var/tmp/spdk-raid.sock 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 169289 ']' 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:29.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.991 23:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:29.991 [2024-07-13 23:21:19.351732] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:29.992 [2024-07-13 23:21:19.352169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169289 ] 00:35:29.992 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:29.992 Zero copy mechanism will not be used. 
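(Aside: unlike the superblock test above, the rebuild test drives I/O through bdevperf. A minimal sketch of the equivalent standalone invocation, assembled only from flags visible in this log; the reading of -z as "start idle and wait for an RPC to launch the workload" is an assumption about bdevperf's behavior, and -U is left uninterpreted.)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
# 60 s of 50/50 random read/write, 3 MiB I/Os (3145728 bytes, hence the zero-copy
# threshold notice above) at queue depth 2, against raid_bdev1.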
00:35:30.250 [2024-07-13 23:21:19.502476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.250 [2024-07-13 23:21:19.574170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.250 [2024-07-13 23:21:19.631982] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:31.185 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:31.185 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:35:31.185 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:31.185 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:35:31.443 BaseBdev1_malloc 00:35:31.443 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:31.701 [2024-07-13 23:21:20.886161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:31.701 [2024-07-13 23:21:20.886458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:31.701 [2024-07-13 23:21:20.886617] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:35:31.701 [2024-07-13 23:21:20.886774] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:31.701 [2024-07-13 23:21:20.889556] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:31.701 [2024-07-13 23:21:20.889751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:31.701 BaseBdev1 00:35:31.701 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:31.701 23:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:35:31.959 BaseBdev2_malloc 00:35:31.959 23:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:31.959 [2024-07-13 23:21:21.344812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:31.959 [2024-07-13 23:21:21.345117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:31.959 [2024-07-13 23:21:21.345224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:35:31.959 [2024-07-13 23:21:21.345579] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:31.959 [2024-07-13 23:21:21.347886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:31.959 [2024-07-13 23:21:21.348064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:31.959 BaseBdev2 00:35:31.959 23:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:35:32.218 spare_malloc 00:35:32.218 23:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:32.476 spare_delay 00:35:32.476 23:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:32.733 [2024-07-13 23:21:22.069080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:32.733 [2024-07-13 23:21:22.069371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.733 [2024-07-13 23:21:22.069474] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:32.733 [2024-07-13 23:21:22.069778] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.733 [2024-07-13 23:21:22.072648] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.733 [2024-07-13 23:21:22.072873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:32.733 spare 00:35:32.734 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:35:32.991 [2024-07-13 23:21:22.285351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:32.991 [2024-07-13 23:21:22.287527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:32.991 [2024-07-13 23:21:22.287876] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:35:32.991 [2024-07-13 23:21:22.288003] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:32.991 [2024-07-13 23:21:22.288231] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:35:32.991 [2024-07-13 23:21:22.288781] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:35:32.991 [2024-07-13 23:21:22.288974] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:35:32.991 [2024-07-13 23:21:22.289313] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:32.991 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:32.992 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:32.992 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:32.992 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:32.992 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.992 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.250 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:33.250 "name": "raid_bdev1", 00:35:33.250 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:33.250 "strip_size_kb": 0, 00:35:33.250 "state": "online", 00:35:33.250 "raid_level": "raid1", 00:35:33.250 "superblock": true, 00:35:33.250 "num_base_bdevs": 2, 00:35:33.250 "num_base_bdevs_discovered": 2, 00:35:33.250 "num_base_bdevs_operational": 2, 00:35:33.250 "base_bdevs_list": [ 00:35:33.250 { 00:35:33.250 "name": "BaseBdev1", 00:35:33.250 "uuid": "696f8703-5086-5ac3-983c-ee097cac6dbc", 00:35:33.250 "is_configured": true, 00:35:33.250 "data_offset": 256, 00:35:33.250 "data_size": 7936 00:35:33.250 }, 00:35:33.250 { 00:35:33.250 "name": "BaseBdev2", 00:35:33.250 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:33.250 "is_configured": true, 00:35:33.250 "data_offset": 256, 00:35:33.250 "data_size": 7936 00:35:33.250 } 00:35:33.250 ] 00:35:33.250 }' 00:35:33.250 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:33.250 23:21:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:33.816 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:33.816 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:34.073 [2024-07-13 23:21:23.305877] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:34.073 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:35:34.073 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:34.073 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
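(Aside: the nbd loop being traced here exports the array as a kernel block device and seeds it end to end. A sketch of the three steps, using the exact commands that appear later in this trace; 7936 blocks of 4096 bytes covers the full 4k-block volume reported for raid_bdev1.)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0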
00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:34.330 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:34.588 [2024-07-13 23:21:23.845854] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:35:34.588 /dev/nbd0 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:34.588 1+0 records in 00:35:34.588 1+0 records out 00:35:34.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907357 s, 4.5 MB/s 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:35:34.588 23:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:35:35.521 7936+0 records in 00:35:35.521 7936+0 records out 00:35:35.521 32505856 bytes (33 MB, 31 MiB) copied, 0.710463 s, 45.8 MB/s 00:35:35.521 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:35.521 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:35.521 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:35.521 23:21:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:35.521 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:35.522 [2024-07-13 23:21:24.908310] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:35.522 23:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:35.780 [2024-07-13 23:21:25.111978] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.780 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.038 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:36.038 "name": "raid_bdev1", 00:35:36.038 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:36.038 "strip_size_kb": 0, 00:35:36.038 "state": "online", 00:35:36.038 "raid_level": "raid1", 00:35:36.038 "superblock": true, 00:35:36.038 "num_base_bdevs": 2, 00:35:36.038 "num_base_bdevs_discovered": 
1, 00:35:36.038 "num_base_bdevs_operational": 1, 00:35:36.038 "base_bdevs_list": [ 00:35:36.038 { 00:35:36.038 "name": null, 00:35:36.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.038 "is_configured": false, 00:35:36.038 "data_offset": 256, 00:35:36.038 "data_size": 7936 00:35:36.038 }, 00:35:36.038 { 00:35:36.038 "name": "BaseBdev2", 00:35:36.038 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:36.038 "is_configured": true, 00:35:36.038 "data_offset": 256, 00:35:36.038 "data_size": 7936 00:35:36.038 } 00:35:36.038 ] 00:35:36.038 }' 00:35:36.038 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:36.038 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:36.604 23:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:36.863 [2024-07-13 23:21:26.184272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:36.863 [2024-07-13 23:21:26.189941] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c7c0 00:35:36.863 [2024-07-13 23:21:26.192287] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:36.863 23:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:38.237 "name": "raid_bdev1", 00:35:38.237 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:38.237 "strip_size_kb": 0, 00:35:38.237 "state": "online", 00:35:38.237 "raid_level": "raid1", 00:35:38.237 "superblock": true, 00:35:38.237 "num_base_bdevs": 2, 00:35:38.237 "num_base_bdevs_discovered": 2, 00:35:38.237 "num_base_bdevs_operational": 2, 00:35:38.237 "process": { 00:35:38.237 "type": "rebuild", 00:35:38.237 "target": "spare", 00:35:38.237 "progress": { 00:35:38.237 "blocks": 3072, 00:35:38.237 "percent": 38 00:35:38.237 } 00:35:38.237 }, 00:35:38.237 "base_bdevs_list": [ 00:35:38.237 { 00:35:38.237 "name": "spare", 00:35:38.237 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:38.237 "is_configured": true, 00:35:38.237 "data_offset": 256, 00:35:38.237 "data_size": 7936 00:35:38.237 }, 00:35:38.237 { 00:35:38.237 "name": "BaseBdev2", 00:35:38.237 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:38.237 "is_configured": true, 00:35:38.237 "data_offset": 256, 00:35:38.237 "data_size": 7936 00:35:38.237 } 00:35:38.237 ] 00:35:38.237 }' 00:35:38.237 23:21:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:38.237 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:38.495 [2024-07-13 23:21:27.814651] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:38.752 [2024-07-13 23:21:27.903780] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:38.752 [2024-07-13 23:21:27.904033] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:38.752 [2024-07-13 23:21:27.904097] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:38.752 [2024-07-13 23:21:27.904213] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:38.752 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.753 23:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.011 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:39.011 "name": "raid_bdev1", 00:35:39.011 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:39.011 "strip_size_kb": 0, 00:35:39.011 "state": "online", 00:35:39.011 "raid_level": "raid1", 00:35:39.011 "superblock": true, 00:35:39.011 "num_base_bdevs": 2, 00:35:39.011 "num_base_bdevs_discovered": 1, 00:35:39.011 "num_base_bdevs_operational": 1, 00:35:39.011 "base_bdevs_list": [ 00:35:39.011 { 00:35:39.011 "name": null, 00:35:39.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.011 "is_configured": false, 00:35:39.011 "data_offset": 256, 00:35:39.011 "data_size": 7936 00:35:39.011 }, 00:35:39.011 { 00:35:39.011 "name": "BaseBdev2", 00:35:39.011 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:39.011 
"is_configured": true, 00:35:39.011 "data_offset": 256, 00:35:39.011 "data_size": 7936 00:35:39.011 } 00:35:39.011 ] 00:35:39.011 }' 00:35:39.011 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:39.011 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.602 23:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.860 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:39.860 "name": "raid_bdev1", 00:35:39.860 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:39.860 "strip_size_kb": 0, 00:35:39.860 "state": "online", 00:35:39.860 "raid_level": "raid1", 00:35:39.860 "superblock": true, 00:35:39.860 "num_base_bdevs": 2, 00:35:39.860 "num_base_bdevs_discovered": 1, 00:35:39.860 "num_base_bdevs_operational": 1, 00:35:39.860 "base_bdevs_list": [ 00:35:39.860 { 00:35:39.860 "name": null, 00:35:39.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.860 "is_configured": false, 00:35:39.860 "data_offset": 256, 00:35:39.860 "data_size": 7936 00:35:39.860 }, 00:35:39.860 { 00:35:39.860 "name": "BaseBdev2", 00:35:39.860 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:39.860 "is_configured": true, 00:35:39.860 "data_offset": 256, 00:35:39.860 "data_size": 7936 00:35:39.860 } 00:35:39.860 ] 00:35:39.860 }' 00:35:39.860 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:39.860 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:39.860 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:39.860 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:39.860 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:40.119 [2024-07-13 23:21:29.322058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:40.119 [2024-07-13 23:21:29.327549] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:35:40.119 [2024-07-13 23:21:29.329760] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:40.119 23:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:41.054 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:41.313 "name": "raid_bdev1", 00:35:41.313 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:41.313 "strip_size_kb": 0, 00:35:41.313 "state": "online", 00:35:41.313 "raid_level": "raid1", 00:35:41.313 "superblock": true, 00:35:41.313 "num_base_bdevs": 2, 00:35:41.313 "num_base_bdevs_discovered": 2, 00:35:41.313 "num_base_bdevs_operational": 2, 00:35:41.313 "process": { 00:35:41.313 "type": "rebuild", 00:35:41.313 "target": "spare", 00:35:41.313 "progress": { 00:35:41.313 "blocks": 3072, 00:35:41.313 "percent": 38 00:35:41.313 } 00:35:41.313 }, 00:35:41.313 "base_bdevs_list": [ 00:35:41.313 { 00:35:41.313 "name": "spare", 00:35:41.313 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:41.313 "is_configured": true, 00:35:41.313 "data_offset": 256, 00:35:41.313 "data_size": 7936 00:35:41.313 }, 00:35:41.313 { 00:35:41.313 "name": "BaseBdev2", 00:35:41.313 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:41.313 "is_configured": true, 00:35:41.313 "data_offset": 256, 00:35:41.313 "data_size": 7936 00:35:41.313 } 00:35:41.313 ] 00:35:41.313 }' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:35:41.313 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1328 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:41.313 23:21:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:41.313 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.571 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:41.571 "name": "raid_bdev1", 00:35:41.571 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:41.571 "strip_size_kb": 0, 00:35:41.571 "state": "online", 00:35:41.571 "raid_level": "raid1", 00:35:41.571 "superblock": true, 00:35:41.571 "num_base_bdevs": 2, 00:35:41.571 "num_base_bdevs_discovered": 2, 00:35:41.571 "num_base_bdevs_operational": 2, 00:35:41.571 "process": { 00:35:41.571 "type": "rebuild", 00:35:41.571 "target": "spare", 00:35:41.571 "progress": { 00:35:41.571 "blocks": 3840, 00:35:41.571 "percent": 48 00:35:41.571 } 00:35:41.571 }, 00:35:41.571 "base_bdevs_list": [ 00:35:41.571 { 00:35:41.571 "name": "spare", 00:35:41.571 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:41.571 "is_configured": true, 00:35:41.571 "data_offset": 256, 00:35:41.571 "data_size": 7936 00:35:41.571 }, 00:35:41.571 { 00:35:41.571 "name": "BaseBdev2", 00:35:41.571 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:41.571 "is_configured": true, 00:35:41.571 "data_offset": 256, 00:35:41.571 "data_size": 7936 00:35:41.571 } 00:35:41.571 ] 00:35:41.571 }' 00:35:41.571 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:41.571 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:41.571 23:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:41.830 23:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.830 23:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.764 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.023 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:43.023 "name": "raid_bdev1", 00:35:43.023 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:43.023 "strip_size_kb": 0, 00:35:43.023 "state": "online", 00:35:43.023 "raid_level": "raid1", 00:35:43.023 
"superblock": true, 00:35:43.023 "num_base_bdevs": 2, 00:35:43.023 "num_base_bdevs_discovered": 2, 00:35:43.023 "num_base_bdevs_operational": 2, 00:35:43.023 "process": { 00:35:43.023 "type": "rebuild", 00:35:43.023 "target": "spare", 00:35:43.023 "progress": { 00:35:43.023 "blocks": 7168, 00:35:43.023 "percent": 90 00:35:43.023 } 00:35:43.023 }, 00:35:43.023 "base_bdevs_list": [ 00:35:43.023 { 00:35:43.023 "name": "spare", 00:35:43.023 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:43.023 "is_configured": true, 00:35:43.023 "data_offset": 256, 00:35:43.023 "data_size": 7936 00:35:43.023 }, 00:35:43.023 { 00:35:43.023 "name": "BaseBdev2", 00:35:43.023 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:43.023 "is_configured": true, 00:35:43.023 "data_offset": 256, 00:35:43.023 "data_size": 7936 00:35:43.023 } 00:35:43.023 ] 00:35:43.023 }' 00:35:43.023 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:43.023 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:43.023 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:43.023 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:43.023 23:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:43.281 [2024-07-13 23:21:32.447322] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:43.281 [2024-07-13 23:21:32.447544] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:43.281 [2024-07-13 23:21:32.447826] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:44.214 "name": "raid_bdev1", 00:35:44.214 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:44.214 "strip_size_kb": 0, 00:35:44.214 "state": "online", 00:35:44.214 "raid_level": "raid1", 00:35:44.214 "superblock": true, 00:35:44.214 "num_base_bdevs": 2, 00:35:44.214 "num_base_bdevs_discovered": 2, 00:35:44.214 "num_base_bdevs_operational": 2, 00:35:44.214 "base_bdevs_list": [ 00:35:44.214 { 00:35:44.214 "name": "spare", 00:35:44.214 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:44.214 "is_configured": true, 00:35:44.214 "data_offset": 256, 00:35:44.214 "data_size": 7936 00:35:44.214 }, 00:35:44.214 { 00:35:44.214 
"name": "BaseBdev2", 00:35:44.214 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:44.214 "is_configured": true, 00:35:44.214 "data_offset": 256, 00:35:44.214 "data_size": 7936 00:35:44.214 } 00:35:44.214 ] 00:35:44.214 }' 00:35:44.214 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.472 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.729 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:44.729 "name": "raid_bdev1", 00:35:44.729 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:44.729 "strip_size_kb": 0, 00:35:44.729 "state": "online", 00:35:44.729 "raid_level": "raid1", 00:35:44.729 "superblock": true, 00:35:44.729 "num_base_bdevs": 2, 00:35:44.729 "num_base_bdevs_discovered": 2, 00:35:44.729 "num_base_bdevs_operational": 2, 00:35:44.729 "base_bdevs_list": [ 00:35:44.729 { 00:35:44.729 "name": "spare", 00:35:44.729 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:44.729 "is_configured": true, 00:35:44.729 "data_offset": 256, 00:35:44.729 "data_size": 7936 00:35:44.729 }, 00:35:44.729 { 00:35:44.729 "name": "BaseBdev2", 00:35:44.729 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:44.729 "is_configured": true, 00:35:44.729 "data_offset": 256, 00:35:44.729 "data_size": 7936 00:35:44.729 } 00:35:44.729 ] 00:35:44.729 }' 00:35:44.729 23:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:44.729 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:44.729 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:44.729 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:44.729 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:44.729 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:44.729 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.730 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.988 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:44.988 "name": "raid_bdev1", 00:35:44.988 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:44.988 "strip_size_kb": 0, 00:35:44.988 "state": "online", 00:35:44.988 "raid_level": "raid1", 00:35:44.988 "superblock": true, 00:35:44.988 "num_base_bdevs": 2, 00:35:44.988 "num_base_bdevs_discovered": 2, 00:35:44.988 "num_base_bdevs_operational": 2, 00:35:44.988 "base_bdevs_list": [ 00:35:44.988 { 00:35:44.988 "name": "spare", 00:35:44.988 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:44.988 "is_configured": true, 00:35:44.988 "data_offset": 256, 00:35:44.988 "data_size": 7936 00:35:44.988 }, 00:35:44.988 { 00:35:44.988 "name": "BaseBdev2", 00:35:44.988 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:44.988 "is_configured": true, 00:35:44.988 "data_offset": 256, 00:35:44.988 "data_size": 7936 00:35:44.988 } 00:35:44.988 ] 00:35:44.988 }' 00:35:44.988 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:44.988 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:45.552 23:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:45.810 [2024-07-13 23:21:35.193856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:45.810 [2024-07-13 23:21:35.194045] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:45.810 [2024-07-13 23:21:35.194270] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:45.810 [2024-07-13 23:21:35.194466] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:45.810 [2024-07-13 23:21:35.194587] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:35:45.810 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.810 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:46.378 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:46.378 /dev/nbd0 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:46.636 1+0 records in 00:35:46.636 1+0 records out 00:35:46.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740525 s, 5.5 MB/s 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:46.636 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:35:46.637 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:46.637 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:46.637 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:35:46.637 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:46.637 23:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:46.637 23:21:35 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:46.895 /dev/nbd1 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:46.895 1+0 records in 00:35:46.895 1+0 records out 00:35:46.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607825 s, 6.7 MB/s 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:46.895 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:47.153 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:35:47.411 23:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:47.669 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:47.927 [2024-07-13 23:21:37.217639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:47.927 [2024-07-13 23:21:37.217935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:47.927 [2024-07-13 23:21:37.218105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:35:47.927 [2024-07-13 23:21:37.218236] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:47.927 [2024-07-13 23:21:37.220693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:47.927 [2024-07-13 23:21:37.220932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:47.928 [2024-07-13 23:21:37.221169] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:47.928 [2024-07-13 23:21:37.221386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:47.928 [2024-07-13 23:21:37.221718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:47.928 spare 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:47.928 23:21:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.928 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.928 [2024-07-13 23:21:37.322010] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:35:47.928 [2024-07-13 23:21:37.322207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:47.928 [2024-07-13 23:21:37.322421] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:35:47.928 [2024-07-13 23:21:37.323227] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:35:47.928 [2024-07-13 23:21:37.323358] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:35:47.928 [2024-07-13 23:21:37.323594] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:48.186 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:48.186 "name": "raid_bdev1", 00:35:48.186 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:48.186 "strip_size_kb": 0, 00:35:48.186 "state": "online", 00:35:48.186 "raid_level": "raid1", 00:35:48.186 "superblock": true, 00:35:48.186 "num_base_bdevs": 2, 00:35:48.186 "num_base_bdevs_discovered": 2, 00:35:48.186 "num_base_bdevs_operational": 2, 00:35:48.186 "base_bdevs_list": [ 00:35:48.186 { 00:35:48.186 "name": "spare", 00:35:48.186 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:48.186 "is_configured": true, 00:35:48.186 "data_offset": 256, 00:35:48.186 "data_size": 7936 00:35:48.186 }, 00:35:48.186 { 00:35:48.186 "name": "BaseBdev2", 00:35:48.186 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:48.186 "is_configured": true, 00:35:48.186 "data_offset": 256, 00:35:48.186 "data_size": 7936 00:35:48.186 } 00:35:48.186 ] 00:35:48.186 }' 00:35:48.186 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:48.186 23:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.753 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.013 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:49.013 "name": "raid_bdev1", 00:35:49.013 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:49.013 "strip_size_kb": 0, 00:35:49.013 "state": "online", 00:35:49.013 "raid_level": "raid1", 00:35:49.013 "superblock": true, 00:35:49.013 "num_base_bdevs": 2, 00:35:49.013 "num_base_bdevs_discovered": 2, 00:35:49.013 "num_base_bdevs_operational": 2, 00:35:49.013 "base_bdevs_list": [ 00:35:49.013 { 00:35:49.013 "name": "spare", 00:35:49.013 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:49.013 "is_configured": true, 00:35:49.013 "data_offset": 256, 00:35:49.013 "data_size": 7936 00:35:49.013 }, 00:35:49.013 { 00:35:49.013 "name": "BaseBdev2", 00:35:49.013 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:49.013 "is_configured": true, 00:35:49.013 "data_offset": 256, 00:35:49.013 "data_size": 7936 00:35:49.013 } 00:35:49.013 ] 00:35:49.013 }' 00:35:49.013 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:49.013 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:49.271 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:49.271 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:49.271 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.271 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:49.530 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:35:49.530 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:49.790 [2024-07-13 23:21:38.942261] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:49.790 23:21:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.790 23:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.790 23:21:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:49.790 "name": "raid_bdev1", 00:35:49.790 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:49.790 "strip_size_kb": 0, 00:35:49.790 "state": "online", 00:35:49.790 "raid_level": "raid1", 00:35:49.790 "superblock": true, 00:35:49.790 "num_base_bdevs": 2, 00:35:49.790 "num_base_bdevs_discovered": 1, 00:35:49.790 "num_base_bdevs_operational": 1, 00:35:49.790 "base_bdevs_list": [ 00:35:49.790 { 00:35:49.790 "name": null, 00:35:49.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.790 "is_configured": false, 00:35:49.790 "data_offset": 256, 00:35:49.790 "data_size": 7936 00:35:49.790 }, 00:35:49.790 { 00:35:49.790 "name": "BaseBdev2", 00:35:49.790 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:49.790 "is_configured": true, 00:35:49.790 "data_offset": 256, 00:35:49.790 "data_size": 7936 00:35:49.790 } 00:35:49.790 ] 00:35:49.790 }' 00:35:49.790 23:21:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:49.790 23:21:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:50.727 23:21:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:50.727 [2024-07-13 23:21:40.074534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:50.727 [2024-07-13 23:21:40.074940] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:50.727 [2024-07-13 23:21:40.075070] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
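The remove/re-add cycle that drives these rebuilds comes down to two RPCs; the superblock sequence numbers in the notice above (spare at 4, raid_bdev1 at 5) are how examine recognizes spare as a stale member and re-adds it. A condensed sketch using the same names and socket as this run:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
rpc bdev_raid_remove_base_bdev spare           # degrade the array: 2 -> 1 operational
rpc bdev_raid_add_base_bdev raid_bdev1 spare   # re-add the stale member; rebuild starts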
00:35:50.727 [2024-07-13 23:21:40.075190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:50.727 [2024-07-13 23:21:40.080720] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb4f0 00:35:50.727 [2024-07-13 23:21:40.083061] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:50.727 23:21:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:52.110 "name": "raid_bdev1", 00:35:52.110 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:52.110 "strip_size_kb": 0, 00:35:52.110 "state": "online", 00:35:52.110 "raid_level": "raid1", 00:35:52.110 "superblock": true, 00:35:52.110 "num_base_bdevs": 2, 00:35:52.110 "num_base_bdevs_discovered": 2, 00:35:52.110 "num_base_bdevs_operational": 2, 00:35:52.110 "process": { 00:35:52.110 "type": "rebuild", 00:35:52.110 "target": "spare", 00:35:52.110 "progress": { 00:35:52.110 "blocks": 3072, 00:35:52.110 "percent": 38 00:35:52.110 } 00:35:52.110 }, 00:35:52.110 "base_bdevs_list": [ 00:35:52.110 { 00:35:52.110 "name": "spare", 00:35:52.110 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:52.110 "is_configured": true, 00:35:52.110 "data_offset": 256, 00:35:52.110 "data_size": 7936 00:35:52.110 }, 00:35:52.110 { 00:35:52.110 "name": "BaseBdev2", 00:35:52.110 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:52.110 "is_configured": true, 00:35:52.110 "data_offset": 256, 00:35:52.110 "data_size": 7936 00:35:52.110 } 00:35:52.110 ] 00:35:52.110 }' 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:52.110 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:52.369 [2024-07-13 23:21:41.712941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:52.628 [2024-07-13 23:21:41.792809] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:52.628 [2024-07-13 23:21:41.793106] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:35:52.628 [2024-07-13 23:21:41.793285] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:52.628 [2024-07-13 23:21:41.793332] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.628 23:21:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.887 23:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:52.887 "name": "raid_bdev1", 00:35:52.887 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:52.887 "strip_size_kb": 0, 00:35:52.887 "state": "online", 00:35:52.887 "raid_level": "raid1", 00:35:52.887 "superblock": true, 00:35:52.887 "num_base_bdevs": 2, 00:35:52.887 "num_base_bdevs_discovered": 1, 00:35:52.887 "num_base_bdevs_operational": 1, 00:35:52.887 "base_bdevs_list": [ 00:35:52.887 { 00:35:52.887 "name": null, 00:35:52.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:52.887 "is_configured": false, 00:35:52.887 "data_offset": 256, 00:35:52.887 "data_size": 7936 00:35:52.887 }, 00:35:52.887 { 00:35:52.887 "name": "BaseBdev2", 00:35:52.887 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:52.887 "is_configured": true, 00:35:52.887 "data_offset": 256, 00:35:52.887 "data_size": 7936 00:35:52.887 } 00:35:52.887 ] 00:35:52.887 }' 00:35:52.887 23:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:52.887 23:21:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:53.455 23:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:53.714 [2024-07-13 23:21:42.909244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:53.714 [2024-07-13 23:21:42.909566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:53.714 [2024-07-13 23:21:42.909665] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:35:53.714 [2024-07-13 23:21:42.910009] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:53.714 [2024-07-13 23:21:42.910541] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:53.714 [2024-07-13 23:21:42.910763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:53.714 [2024-07-13 23:21:42.911019] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:53.714 [2024-07-13 23:21:42.911145] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:53.714 [2024-07-13 23:21:42.911257] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:35:53.714 [2024-07-13 23:21:42.911383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:53.714 [2024-07-13 23:21:42.916551] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb830 00:35:53.714 spare 00:35:53.714 [2024-07-13 23:21:42.918945] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:53.714 23:21:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.647 23:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.905 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:54.905 "name": "raid_bdev1", 00:35:54.905 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:54.905 "strip_size_kb": 0, 00:35:54.905 "state": "online", 00:35:54.905 "raid_level": "raid1", 00:35:54.905 "superblock": true, 00:35:54.905 "num_base_bdevs": 2, 00:35:54.905 "num_base_bdevs_discovered": 2, 00:35:54.905 "num_base_bdevs_operational": 2, 00:35:54.905 "process": { 00:35:54.905 "type": "rebuild", 00:35:54.905 "target": "spare", 00:35:54.905 "progress": { 00:35:54.905 "blocks": 3072, 00:35:54.905 "percent": 38 00:35:54.905 } 00:35:54.905 }, 00:35:54.905 "base_bdevs_list": [ 00:35:54.905 { 00:35:54.905 "name": "spare", 00:35:54.905 "uuid": "e56d8c01-4cf1-5f16-8dce-c9549bffed64", 00:35:54.905 "is_configured": true, 00:35:54.905 "data_offset": 256, 00:35:54.905 "data_size": 7936 00:35:54.905 }, 00:35:54.905 { 00:35:54.905 "name": "BaseBdev2", 00:35:54.905 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:54.905 "is_configured": true, 00:35:54.905 "data_offset": 256, 00:35:54.905 "data_size": 7936 00:35:54.905 } 00:35:54.905 ] 00:35:54.905 }' 00:35:54.905 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:54.905 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
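The @189/@190 checks just traced confirm an active rebuild targeting the spare; the test sleeps once and re-queries. A slightly more defensive sketch polls until the process appears, using the same jq defaults as the test (a missing .process collapses to "none"); the rpc/sock paths are the ones from this run:

# Poll until raid_bdev1 reports process.type == "rebuild".
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
until info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "raid_bdev1")') \
      && [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]]; do
    sleep 0.2
done
jq -r '.process.target // "none"' <<<"$info"     # expected: spare
jq -r '.process.progress.percent' <<<"$info"     # e.g. 38 (3072 of 7936 data blocks above)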
00:35:54.905 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:54.905 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:54.905 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:55.164 [2024-07-13 23:21:44.537266] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:55.423 [2024-07-13 23:21:44.628517] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:55.423 [2024-07-13 23:21:44.628830] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:55.423 [2024-07-13 23:21:44.629011] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:55.423 [2024-07-13 23:21:44.629126] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:55.423 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.682 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:55.682 "name": "raid_bdev1", 00:35:55.682 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:55.682 "strip_size_kb": 0, 00:35:55.682 "state": "online", 00:35:55.682 "raid_level": "raid1", 00:35:55.682 "superblock": true, 00:35:55.682 "num_base_bdevs": 2, 00:35:55.682 "num_base_bdevs_discovered": 1, 00:35:55.682 "num_base_bdevs_operational": 1, 00:35:55.682 "base_bdevs_list": [ 00:35:55.682 { 00:35:55.682 "name": null, 00:35:55.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.682 "is_configured": false, 00:35:55.682 "data_offset": 256, 00:35:55.682 "data_size": 7936 00:35:55.682 }, 00:35:55.682 { 00:35:55.682 "name": "BaseBdev2", 00:35:55.682 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:55.682 "is_configured": true, 00:35:55.682 "data_offset": 256, 00:35:55.682 "data_size": 7936 00:35:55.682 } 00:35:55.682 ] 00:35:55.682 }' 00:35:55.682 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:35:55.682 23:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.250 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.508 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:56.508 "name": "raid_bdev1", 00:35:56.508 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:56.508 "strip_size_kb": 0, 00:35:56.508 "state": "online", 00:35:56.508 "raid_level": "raid1", 00:35:56.508 "superblock": true, 00:35:56.508 "num_base_bdevs": 2, 00:35:56.508 "num_base_bdevs_discovered": 1, 00:35:56.508 "num_base_bdevs_operational": 1, 00:35:56.508 "base_bdevs_list": [ 00:35:56.508 { 00:35:56.508 "name": null, 00:35:56.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:56.508 "is_configured": false, 00:35:56.508 "data_offset": 256, 00:35:56.508 "data_size": 7936 00:35:56.508 }, 00:35:56.508 { 00:35:56.508 "name": "BaseBdev2", 00:35:56.508 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:56.508 "is_configured": true, 00:35:56.508 "data_offset": 256, 00:35:56.508 "data_size": 7936 00:35:56.508 } 00:35:56.508 ] 00:35:56.508 }' 00:35:56.508 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:56.508 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:56.508 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:56.508 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:56.508 23:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:35:56.766 23:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:57.025 [2024-07-13 23:21:46.358913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:57.025 [2024-07-13 23:21:46.359258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.025 [2024-07-13 23:21:46.359434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:35:57.025 [2024-07-13 23:21:46.359588] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.025 [2024-07-13 23:21:46.360097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.025 [2024-07-13 23:21:46.360290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:35:57.025 [2024-07-13 23:21:46.360519] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:57.025 [2024-07-13 23:21:46.360641] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:57.025 [2024-07-13 23:21:46.360751] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:57.025 BaseBdev1 00:35:57.025 23:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:58.399 "name": "raid_bdev1", 00:35:58.399 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:58.399 "strip_size_kb": 0, 00:35:58.399 "state": "online", 00:35:58.399 "raid_level": "raid1", 00:35:58.399 "superblock": true, 00:35:58.399 "num_base_bdevs": 2, 00:35:58.399 "num_base_bdevs_discovered": 1, 00:35:58.399 "num_base_bdevs_operational": 1, 00:35:58.399 "base_bdevs_list": [ 00:35:58.399 { 00:35:58.399 "name": null, 00:35:58.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.399 "is_configured": false, 00:35:58.399 "data_offset": 256, 00:35:58.399 "data_size": 7936 00:35:58.399 }, 00:35:58.399 { 00:35:58.399 "name": "BaseBdev2", 00:35:58.399 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:58.399 "is_configured": true, 00:35:58.399 "data_offset": 256, 00:35:58.399 "data_size": 7936 00:35:58.399 } 00:35:58.399 ] 00:35:58.399 }' 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:58.399 23:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.966 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.224 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:59.224 "name": "raid_bdev1", 00:35:59.224 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:35:59.224 "strip_size_kb": 0, 00:35:59.224 "state": "online", 00:35:59.224 "raid_level": "raid1", 00:35:59.224 "superblock": true, 00:35:59.224 "num_base_bdevs": 2, 00:35:59.224 "num_base_bdevs_discovered": 1, 00:35:59.224 "num_base_bdevs_operational": 1, 00:35:59.224 "base_bdevs_list": [ 00:35:59.224 { 00:35:59.224 "name": null, 00:35:59.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.224 "is_configured": false, 00:35:59.224 "data_offset": 256, 00:35:59.224 "data_size": 7936 00:35:59.224 }, 00:35:59.224 { 00:35:59.224 "name": "BaseBdev2", 00:35:59.224 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:35:59.224 "is_configured": true, 00:35:59.224 "data_offset": 256, 00:35:59.224 "data_size": 7936 00:35:59.224 } 00:35:59.224 ] 00:35:59.224 }' 00:35:59.224 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:59.225 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:59.225 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:59.483 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:59.483 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:59.483 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:35:59.483 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:59.483 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:59.483 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:59.484 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:59.484 [2024-07-13 23:21:48.875571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:59.484 [2024-07-13 23:21:48.875966] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:59.484 [2024-07-13 23:21:48.876096] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:59.484 request: 00:35:59.484 { 00:35:59.484 "base_bdev": "BaseBdev1", 00:35:59.484 "raid_bdev": "raid_bdev1", 00:35:59.484 "method": "bdev_raid_add_base_bdev", 00:35:59.484 "req_id": 1 00:35:59.484 } 00:35:59.484 Got JSON-RPC error response 00:35:59.484 response: 00:35:59.484 { 00:35:59.484 "code": -22, 00:35:59.484 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:59.484 } 00:35:59.742 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:35:59.742 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.742 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.742 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.742 23:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.679 23:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.937 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:00.937 "name": "raid_bdev1", 00:36:00.937 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:36:00.937 "strip_size_kb": 0, 00:36:00.937 "state": "online", 00:36:00.937 "raid_level": "raid1", 00:36:00.937 "superblock": true, 00:36:00.937 "num_base_bdevs": 2, 00:36:00.937 "num_base_bdevs_discovered": 1, 00:36:00.937 "num_base_bdevs_operational": 1, 00:36:00.937 
"base_bdevs_list": [ 00:36:00.937 { 00:36:00.937 "name": null, 00:36:00.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.937 "is_configured": false, 00:36:00.937 "data_offset": 256, 00:36:00.937 "data_size": 7936 00:36:00.937 }, 00:36:00.937 { 00:36:00.937 "name": "BaseBdev2", 00:36:00.937 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:36:00.937 "is_configured": true, 00:36:00.937 "data_offset": 256, 00:36:00.937 "data_size": 7936 00:36:00.937 } 00:36:00.937 ] 00:36:00.937 }' 00:36:00.937 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:00.937 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.504 23:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.762 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:01.762 "name": "raid_bdev1", 00:36:01.762 "uuid": "30eb73bf-3cc2-4bca-9655-cccb3b600990", 00:36:01.762 "strip_size_kb": 0, 00:36:01.762 "state": "online", 00:36:01.762 "raid_level": "raid1", 00:36:01.762 "superblock": true, 00:36:01.762 "num_base_bdevs": 2, 00:36:01.762 "num_base_bdevs_discovered": 1, 00:36:01.762 "num_base_bdevs_operational": 1, 00:36:01.762 "base_bdevs_list": [ 00:36:01.762 { 00:36:01.762 "name": null, 00:36:01.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.762 "is_configured": false, 00:36:01.762 "data_offset": 256, 00:36:01.762 "data_size": 7936 00:36:01.762 }, 00:36:01.762 { 00:36:01.762 "name": "BaseBdev2", 00:36:01.762 "uuid": "a5cced7f-b59c-5dc6-8677-b3332487d470", 00:36:01.762 "is_configured": true, 00:36:01.762 "data_offset": 256, 00:36:01.762 "data_size": 7936 00:36:01.762 } 00:36:01.762 ] 00:36:01.762 }' 00:36:01.762 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:01.762 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:01.762 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 169289 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 169289 ']' 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 169289 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 169289 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 169289' 00:36:02.020 killing process with pid 169289 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 169289 00:36:02.020 Received shutdown signal, test time was about 60.000000 seconds 00:36:02.020 00:36:02.020 Latency(us) 00:36:02.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.020 =================================================================================================================== 00:36:02.020 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:02.020 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 169289 00:36:02.020 [2024-07-13 23:21:51.240612] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:02.020 [2024-07-13 23:21:51.240924] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:02.020 [2024-07-13 23:21:51.241092] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:02.020 [2024-07-13 23:21:51.241203] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:36:02.020 [2024-07-13 23:21:51.268993] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:02.278 ************************************ 00:36:02.278 END TEST raid_rebuild_test_sb_4k 00:36:02.278 ************************************ 00:36:02.278 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:36:02.278 00:36:02.278 real 0m32.215s 00:36:02.278 user 0m52.129s 00:36:02.278 sys 0m3.643s 00:36:02.278 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:02.278 23:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:02.278 23:21:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:02.278 23:21:51 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:36:02.278 23:21:51 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:36:02.278 23:21:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:36:02.278 23:21:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:02.278 23:21:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:02.278 ************************************ 00:36:02.278 START TEST raid_state_function_test_sb_md_separate 00:36:02.278 ************************************ 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:02.278 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=170169 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 170169' 00:36:02.279 Process raid pid: 170169 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 170169 /var/tmp/spdk-raid.sock 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 170169 ']' 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:02.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:02.279 23:21:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:02.279 [2024-07-13 23:21:51.637874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:36:02.279 [2024-07-13 23:21:51.638406] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.538 [2024-07-13 23:21:51.790386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.538 [2024-07-13 23:21:51.866656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.538 [2024-07-13 23:21:51.920281] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:03.493 [2024-07-13 23:21:52.775672] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:03.493 [2024-07-13 23:21:52.775968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:03.493 [2024-07-13 23:21:52.776081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:03.493 [2024-07-13 23:21:52.776143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:03.493 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:03.494 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:03.494 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:03.494 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:03.494 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:03.494 23:21:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.494 23:21:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:03.752 23:21:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:03.752 "name": "Existed_Raid", 00:36:03.752 "uuid": "5586cda1-d5c7-4090-a6d0-9156fb046408", 00:36:03.752 "strip_size_kb": 0, 00:36:03.752 "state": "configuring", 00:36:03.752 "raid_level": "raid1", 00:36:03.752 "superblock": true, 00:36:03.752 "num_base_bdevs": 2, 00:36:03.752 "num_base_bdevs_discovered": 0, 00:36:03.752 "num_base_bdevs_operational": 2, 00:36:03.752 "base_bdevs_list": [ 00:36:03.752 { 00:36:03.752 "name": "BaseBdev1", 00:36:03.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.752 "is_configured": false, 00:36:03.752 "data_offset": 0, 00:36:03.752 "data_size": 0 00:36:03.752 }, 00:36:03.752 { 00:36:03.752 "name": "BaseBdev2", 00:36:03.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.752 "is_configured": false, 00:36:03.752 "data_offset": 0, 00:36:03.752 "data_size": 0 00:36:03.752 } 00:36:03.752 ] 00:36:03.752 }' 00:36:03.752 23:21:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:03.752 23:21:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:04.317 23:21:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:04.574 [2024-07-13 23:21:53.871827] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:04.574 [2024-07-13 23:21:53.872164] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:36:04.574 23:21:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:04.832 [2024-07-13 23:21:54.131899] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:04.832 [2024-07-13 23:21:54.132209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:04.832 [2024-07-13 23:21:54.132323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:04.833 [2024-07-13 23:21:54.132466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:04.833 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:36:05.090 [2024-07-13 23:21:54.357136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:05.090 BaseBdev1 00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 
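For the md_separate variant running above, base_malloc_params='-m 32' gives each base device 4096-byte data blocks plus 32 bytes of separate (non-interleaved) metadata per block, which the bdev_get_bdevs dump reports as md_size 32 with md_interleave false. A sketch of the equivalent manual setup, reusing the exact RPCs visible in this run:

# Two 32 MiB malloc bdevs (8192 blocks of 4096 bytes, matching num_blocks
# in the dump) with 32-byte separate metadata, assembled into a raid1
# that writes an on-disk superblock (-s).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
"$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
"$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid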
00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:05.090 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:05.347 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:05.606 [ 00:36:05.606 { 00:36:05.606 "name": "BaseBdev1", 00:36:05.606 "aliases": [ 00:36:05.606 "67f94b49-561b-4663-b1c3-85952b47dcc8" 00:36:05.606 ], 00:36:05.606 "product_name": "Malloc disk", 00:36:05.606 "block_size": 4096, 00:36:05.606 "num_blocks": 8192, 00:36:05.606 "uuid": "67f94b49-561b-4663-b1c3-85952b47dcc8", 00:36:05.606 "md_size": 32, 00:36:05.606 "md_interleave": false, 00:36:05.606 "dif_type": 0, 00:36:05.606 "assigned_rate_limits": { 00:36:05.606 "rw_ios_per_sec": 0, 00:36:05.606 "rw_mbytes_per_sec": 0, 00:36:05.606 "r_mbytes_per_sec": 0, 00:36:05.606 "w_mbytes_per_sec": 0 00:36:05.606 }, 00:36:05.606 "claimed": true, 00:36:05.606 "claim_type": "exclusive_write", 00:36:05.606 "zoned": false, 00:36:05.606 "supported_io_types": { 00:36:05.606 "read": true, 00:36:05.606 "write": true, 00:36:05.606 "unmap": true, 00:36:05.606 "flush": true, 00:36:05.606 "reset": true, 00:36:05.606 "nvme_admin": false, 00:36:05.606 "nvme_io": false, 00:36:05.606 "nvme_io_md": false, 00:36:05.606 "write_zeroes": true, 00:36:05.606 "zcopy": true, 00:36:05.606 "get_zone_info": false, 00:36:05.606 "zone_management": false, 00:36:05.606 "zone_append": false, 00:36:05.606 "compare": false, 00:36:05.606 "compare_and_write": false, 00:36:05.606 "abort": true, 00:36:05.606 "seek_hole": false, 00:36:05.606 "seek_data": false, 00:36:05.606 "copy": true, 00:36:05.606 "nvme_iov_md": false 00:36:05.606 }, 00:36:05.606 "memory_domains": [ 00:36:05.606 { 00:36:05.606 "dma_device_id": "system", 00:36:05.606 "dma_device_type": 1 00:36:05.606 }, 00:36:05.606 { 00:36:05.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:05.606 "dma_device_type": 2 00:36:05.606 } 00:36:05.606 ], 00:36:05.606 "driver_specific": {} 00:36:05.606 } 00:36:05.606 ] 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.606 23:21:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.606 23:21:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:05.606 "name": "Existed_Raid", 00:36:05.606 "uuid": "66c340cb-4791-4695-a1fc-55d7c22c6596", 00:36:05.606 "strip_size_kb": 0, 00:36:05.606 "state": "configuring", 00:36:05.606 "raid_level": "raid1", 00:36:05.606 "superblock": true, 00:36:05.606 "num_base_bdevs": 2, 00:36:05.606 "num_base_bdevs_discovered": 1, 00:36:05.606 "num_base_bdevs_operational": 2, 00:36:05.606 "base_bdevs_list": [ 00:36:05.606 { 00:36:05.606 "name": "BaseBdev1", 00:36:05.606 "uuid": "67f94b49-561b-4663-b1c3-85952b47dcc8", 00:36:05.606 "is_configured": true, 00:36:05.606 "data_offset": 256, 00:36:05.606 "data_size": 7936 00:36:05.606 }, 00:36:05.606 { 00:36:05.606 "name": "BaseBdev2", 00:36:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.606 "is_configured": false, 00:36:05.606 "data_offset": 0, 00:36:05.606 "data_size": 0 00:36:05.606 } 00:36:05.606 ] 00:36:05.606 }' 00:36:05.606 23:21:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:05.606 23:21:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:06.541 23:21:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:06.541 [2024-07-13 23:21:55.777566] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:06.541 [2024-07-13 23:21:55.777824] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:36:06.541 23:21:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:06.799 [2024-07-13 23:21:55.997708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:06.799 [2024-07-13 23:21:56.000018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:06.799 [2024-07-13 23:21:56.000216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:06.799 23:21:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.799 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:07.057 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:07.057 "name": "Existed_Raid", 00:36:07.057 "uuid": "742a70c2-e9fe-49dd-a6c4-0a3f90e5f071", 00:36:07.057 "strip_size_kb": 0, 00:36:07.057 "state": "configuring", 00:36:07.057 "raid_level": "raid1", 00:36:07.057 "superblock": true, 00:36:07.057 "num_base_bdevs": 2, 00:36:07.057 "num_base_bdevs_discovered": 1, 00:36:07.057 "num_base_bdevs_operational": 2, 00:36:07.057 "base_bdevs_list": [ 00:36:07.057 { 00:36:07.057 "name": "BaseBdev1", 00:36:07.057 "uuid": "67f94b49-561b-4663-b1c3-85952b47dcc8", 00:36:07.057 "is_configured": true, 00:36:07.057 "data_offset": 256, 00:36:07.057 "data_size": 7936 00:36:07.057 }, 00:36:07.057 { 00:36:07.057 "name": "BaseBdev2", 00:36:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.057 "is_configured": false, 00:36:07.057 "data_offset": 0, 00:36:07.057 "data_size": 0 00:36:07.057 } 00:36:07.057 ] 00:36:07.057 }' 00:36:07.057 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:07.057 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:07.623 23:21:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:36:07.881 [2024-07-13 23:21:57.198682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:07.881 [2024-07-13 23:21:57.199233] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:36:07.881 [2024-07-13 23:21:57.199419] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:07.881 [2024-07-13 23:21:57.199742] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:36:07.881 BaseBdev2 00:36:07.881 [2024-07-13 23:21:57.200107] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000006080 00:36:07.881 [2024-07-13 23:21:57.200128] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:36:07.881 [2024-07-13 23:21:57.200269] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:07.881 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:08.139 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:08.397 [ 00:36:08.397 { 00:36:08.397 "name": "BaseBdev2", 00:36:08.397 "aliases": [ 00:36:08.397 "ac77cc80-14a6-44cf-b412-29bafd907bb9" 00:36:08.397 ], 00:36:08.397 "product_name": "Malloc disk", 00:36:08.397 "block_size": 4096, 00:36:08.397 "num_blocks": 8192, 00:36:08.397 "uuid": "ac77cc80-14a6-44cf-b412-29bafd907bb9", 00:36:08.397 "md_size": 32, 00:36:08.397 "md_interleave": false, 00:36:08.397 "dif_type": 0, 00:36:08.397 "assigned_rate_limits": { 00:36:08.397 "rw_ios_per_sec": 0, 00:36:08.397 "rw_mbytes_per_sec": 0, 00:36:08.397 "r_mbytes_per_sec": 0, 00:36:08.397 "w_mbytes_per_sec": 0 00:36:08.397 }, 00:36:08.397 "claimed": true, 00:36:08.397 "claim_type": "exclusive_write", 00:36:08.397 "zoned": false, 00:36:08.397 "supported_io_types": { 00:36:08.397 "read": true, 00:36:08.397 "write": true, 00:36:08.397 "unmap": true, 00:36:08.397 "flush": true, 00:36:08.397 "reset": true, 00:36:08.397 "nvme_admin": false, 00:36:08.397 "nvme_io": false, 00:36:08.397 "nvme_io_md": false, 00:36:08.397 "write_zeroes": true, 00:36:08.397 "zcopy": true, 00:36:08.397 "get_zone_info": false, 00:36:08.397 "zone_management": false, 00:36:08.397 "zone_append": false, 00:36:08.397 "compare": false, 00:36:08.397 "compare_and_write": false, 00:36:08.397 "abort": true, 00:36:08.397 "seek_hole": false, 00:36:08.397 "seek_data": false, 00:36:08.397 "copy": true, 00:36:08.397 "nvme_iov_md": false 00:36:08.397 }, 00:36:08.397 "memory_domains": [ 00:36:08.397 { 00:36:08.397 "dma_device_id": "system", 00:36:08.397 "dma_device_type": 1 00:36:08.397 }, 00:36:08.397 { 00:36:08.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:08.397 "dma_device_type": 2 00:36:08.397 } 00:36:08.397 ], 00:36:08.397 "driver_specific": {} 00:36:08.397 } 00:36:08.397 ] 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # 
(( i < num_base_bdevs )) 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.397 23:21:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:08.657 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:08.657 "name": "Existed_Raid", 00:36:08.657 "uuid": "742a70c2-e9fe-49dd-a6c4-0a3f90e5f071", 00:36:08.657 "strip_size_kb": 0, 00:36:08.657 "state": "online", 00:36:08.657 "raid_level": "raid1", 00:36:08.657 "superblock": true, 00:36:08.657 "num_base_bdevs": 2, 00:36:08.657 "num_base_bdevs_discovered": 2, 00:36:08.657 "num_base_bdevs_operational": 2, 00:36:08.657 "base_bdevs_list": [ 00:36:08.657 { 00:36:08.657 "name": "BaseBdev1", 00:36:08.657 "uuid": "67f94b49-561b-4663-b1c3-85952b47dcc8", 00:36:08.657 "is_configured": true, 00:36:08.657 "data_offset": 256, 00:36:08.657 "data_size": 7936 00:36:08.657 }, 00:36:08.657 { 00:36:08.657 "name": "BaseBdev2", 00:36:08.657 "uuid": "ac77cc80-14a6-44cf-b412-29bafd907bb9", 00:36:08.657 "is_configured": true, 00:36:08.657 "data_offset": 256, 00:36:08.657 "data_size": 7936 00:36:08.657 } 00:36:08.657 ] 00:36:08.657 }' 00:36:08.657 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:08.657 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:09.592 
23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:09.592 [2024-07-13 23:21:58.887462] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:09.592 "name": "Existed_Raid", 00:36:09.592 "aliases": [ 00:36:09.592 "742a70c2-e9fe-49dd-a6c4-0a3f90e5f071" 00:36:09.592 ], 00:36:09.592 "product_name": "Raid Volume", 00:36:09.592 "block_size": 4096, 00:36:09.592 "num_blocks": 7936, 00:36:09.592 "uuid": "742a70c2-e9fe-49dd-a6c4-0a3f90e5f071", 00:36:09.592 "md_size": 32, 00:36:09.592 "md_interleave": false, 00:36:09.592 "dif_type": 0, 00:36:09.592 "assigned_rate_limits": { 00:36:09.592 "rw_ios_per_sec": 0, 00:36:09.592 "rw_mbytes_per_sec": 0, 00:36:09.592 "r_mbytes_per_sec": 0, 00:36:09.592 "w_mbytes_per_sec": 0 00:36:09.592 }, 00:36:09.592 "claimed": false, 00:36:09.592 "zoned": false, 00:36:09.592 "supported_io_types": { 00:36:09.592 "read": true, 00:36:09.592 "write": true, 00:36:09.592 "unmap": false, 00:36:09.592 "flush": false, 00:36:09.592 "reset": true, 00:36:09.592 "nvme_admin": false, 00:36:09.592 "nvme_io": false, 00:36:09.592 "nvme_io_md": false, 00:36:09.592 "write_zeroes": true, 00:36:09.592 "zcopy": false, 00:36:09.592 "get_zone_info": false, 00:36:09.592 "zone_management": false, 00:36:09.592 "zone_append": false, 00:36:09.592 "compare": false, 00:36:09.592 "compare_and_write": false, 00:36:09.592 "abort": false, 00:36:09.592 "seek_hole": false, 00:36:09.592 "seek_data": false, 00:36:09.592 "copy": false, 00:36:09.592 "nvme_iov_md": false 00:36:09.592 }, 00:36:09.592 "memory_domains": [ 00:36:09.592 { 00:36:09.592 "dma_device_id": "system", 00:36:09.592 "dma_device_type": 1 00:36:09.592 }, 00:36:09.592 { 00:36:09.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:09.592 "dma_device_type": 2 00:36:09.592 }, 00:36:09.592 { 00:36:09.592 "dma_device_id": "system", 00:36:09.592 "dma_device_type": 1 00:36:09.592 }, 00:36:09.592 { 00:36:09.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:09.592 "dma_device_type": 2 00:36:09.592 } 00:36:09.592 ], 00:36:09.592 "driver_specific": { 00:36:09.592 "raid": { 00:36:09.592 "uuid": "742a70c2-e9fe-49dd-a6c4-0a3f90e5f071", 00:36:09.592 "strip_size_kb": 0, 00:36:09.592 "state": "online", 00:36:09.592 "raid_level": "raid1", 00:36:09.592 "superblock": true, 00:36:09.592 "num_base_bdevs": 2, 00:36:09.592 "num_base_bdevs_discovered": 2, 00:36:09.592 "num_base_bdevs_operational": 2, 00:36:09.592 "base_bdevs_list": [ 00:36:09.592 { 00:36:09.592 "name": "BaseBdev1", 00:36:09.592 "uuid": "67f94b49-561b-4663-b1c3-85952b47dcc8", 00:36:09.592 "is_configured": true, 00:36:09.592 "data_offset": 256, 00:36:09.592 "data_size": 7936 00:36:09.592 }, 00:36:09.592 { 00:36:09.592 "name": "BaseBdev2", 00:36:09.592 "uuid": "ac77cc80-14a6-44cf-b412-29bafd907bb9", 00:36:09.592 "is_configured": true, 00:36:09.592 "data_offset": 256, 00:36:09.592 "data_size": 7936 00:36:09.592 } 00:36:09.592 ] 00:36:09.592 } 00:36:09.592 } 00:36:09.592 }' 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:09.592 BaseBdev2' 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:09.592 23:21:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:09.851 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:09.851 "name": "BaseBdev1", 00:36:09.851 "aliases": [ 00:36:09.851 "67f94b49-561b-4663-b1c3-85952b47dcc8" 00:36:09.851 ], 00:36:09.851 "product_name": "Malloc disk", 00:36:09.851 "block_size": 4096, 00:36:09.851 "num_blocks": 8192, 00:36:09.851 "uuid": "67f94b49-561b-4663-b1c3-85952b47dcc8", 00:36:09.851 "md_size": 32, 00:36:09.851 "md_interleave": false, 00:36:09.851 "dif_type": 0, 00:36:09.851 "assigned_rate_limits": { 00:36:09.851 "rw_ios_per_sec": 0, 00:36:09.851 "rw_mbytes_per_sec": 0, 00:36:09.851 "r_mbytes_per_sec": 0, 00:36:09.851 "w_mbytes_per_sec": 0 00:36:09.851 }, 00:36:09.851 "claimed": true, 00:36:09.851 "claim_type": "exclusive_write", 00:36:09.851 "zoned": false, 00:36:09.851 "supported_io_types": { 00:36:09.851 "read": true, 00:36:09.851 "write": true, 00:36:09.851 "unmap": true, 00:36:09.851 "flush": true, 00:36:09.851 "reset": true, 00:36:09.851 "nvme_admin": false, 00:36:09.851 "nvme_io": false, 00:36:09.851 "nvme_io_md": false, 00:36:09.851 "write_zeroes": true, 00:36:09.851 "zcopy": true, 00:36:09.851 "get_zone_info": false, 00:36:09.851 "zone_management": false, 00:36:09.851 "zone_append": false, 00:36:09.851 "compare": false, 00:36:09.851 "compare_and_write": false, 00:36:09.851 "abort": true, 00:36:09.851 "seek_hole": false, 00:36:09.851 "seek_data": false, 00:36:09.851 "copy": true, 00:36:09.851 "nvme_iov_md": false 00:36:09.851 }, 00:36:09.851 "memory_domains": [ 00:36:09.851 { 00:36:09.851 "dma_device_id": "system", 00:36:09.851 "dma_device_type": 1 00:36:09.851 }, 00:36:09.851 { 00:36:09.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:09.851 "dma_device_type": 2 00:36:09.851 } 00:36:09.851 ], 00:36:09.851 "driver_specific": {} 00:36:09.851 }' 00:36:09.851 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:09.851 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 
false == false ]] 00:36:10.110 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:10.369 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:10.369 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:10.369 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:10.369 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:10.369 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:10.627 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:10.627 "name": "BaseBdev2", 00:36:10.627 "aliases": [ 00:36:10.627 "ac77cc80-14a6-44cf-b412-29bafd907bb9" 00:36:10.627 ], 00:36:10.627 "product_name": "Malloc disk", 00:36:10.627 "block_size": 4096, 00:36:10.627 "num_blocks": 8192, 00:36:10.627 "uuid": "ac77cc80-14a6-44cf-b412-29bafd907bb9", 00:36:10.627 "md_size": 32, 00:36:10.627 "md_interleave": false, 00:36:10.627 "dif_type": 0, 00:36:10.627 "assigned_rate_limits": { 00:36:10.627 "rw_ios_per_sec": 0, 00:36:10.627 "rw_mbytes_per_sec": 0, 00:36:10.627 "r_mbytes_per_sec": 0, 00:36:10.627 "w_mbytes_per_sec": 0 00:36:10.627 }, 00:36:10.627 "claimed": true, 00:36:10.627 "claim_type": "exclusive_write", 00:36:10.627 "zoned": false, 00:36:10.627 "supported_io_types": { 00:36:10.627 "read": true, 00:36:10.627 "write": true, 00:36:10.627 "unmap": true, 00:36:10.627 "flush": true, 00:36:10.627 "reset": true, 00:36:10.627 "nvme_admin": false, 00:36:10.627 "nvme_io": false, 00:36:10.627 "nvme_io_md": false, 00:36:10.627 "write_zeroes": true, 00:36:10.627 "zcopy": true, 00:36:10.627 "get_zone_info": false, 00:36:10.627 "zone_management": false, 00:36:10.627 "zone_append": false, 00:36:10.627 "compare": false, 00:36:10.627 "compare_and_write": false, 00:36:10.627 "abort": true, 00:36:10.627 "seek_hole": false, 00:36:10.627 "seek_data": false, 00:36:10.627 "copy": true, 00:36:10.628 "nvme_iov_md": false 00:36:10.628 }, 00:36:10.628 "memory_domains": [ 00:36:10.628 { 00:36:10.628 "dma_device_id": "system", 00:36:10.628 "dma_device_type": 1 00:36:10.628 }, 00:36:10.628 { 00:36:10.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:10.628 "dma_device_type": 2 00:36:10.628 } 00:36:10.628 ], 00:36:10.628 "driver_specific": {} 00:36:10.628 }' 00:36:10.628 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:10.628 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:10.628 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:10.628 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:10.628 23:21:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:10.628 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:10.628 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:10.886 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:36:10.886 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:10.886 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:10.886 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:10.886 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:10.886 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:11.144 [2024-07-13 23:22:00.488520] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.144 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:11.403 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:11.403 "name": "Existed_Raid", 00:36:11.403 "uuid": "742a70c2-e9fe-49dd-a6c4-0a3f90e5f071", 00:36:11.403 "strip_size_kb": 0, 00:36:11.403 "state": "online", 00:36:11.403 "raid_level": "raid1", 00:36:11.403 "superblock": true, 00:36:11.403 "num_base_bdevs": 2, 00:36:11.403 "num_base_bdevs_discovered": 1, 00:36:11.403 "num_base_bdevs_operational": 1, 00:36:11.403 "base_bdevs_list": [ 
00:36:11.403 { 00:36:11.403 "name": null, 00:36:11.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.403 "is_configured": false, 00:36:11.403 "data_offset": 256, 00:36:11.403 "data_size": 7936 00:36:11.403 }, 00:36:11.403 { 00:36:11.403 "name": "BaseBdev2", 00:36:11.403 "uuid": "ac77cc80-14a6-44cf-b412-29bafd907bb9", 00:36:11.403 "is_configured": true, 00:36:11.403 "data_offset": 256, 00:36:11.403 "data_size": 7936 00:36:11.403 } 00:36:11.403 ] 00:36:11.403 }' 00:36:11.403 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:11.403 23:22:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:12.336 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:12.594 [2024-07-13 23:22:01.861701] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:12.594 [2024-07-13 23:22:01.862011] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:12.594 [2024-07-13 23:22:01.873622] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:12.594 [2024-07-13 23:22:01.873888] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:12.594 [2024-07-13 23:22:01.874026] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:36:12.594 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:12.594 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:12.594 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:12.594 23:22:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 170169 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # 
'[' -z 170169 ']' 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 170169 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170169 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170169' 00:36:12.852 killing process with pid 170169 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 170169 00:36:12.852 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 170169 00:36:12.852 [2024-07-13 23:22:02.126142] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:12.852 [2024-07-13 23:22:02.126218] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:13.111 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:36:13.111 00:36:13.111 real 0m10.789s 00:36:13.111 user 0m19.926s 00:36:13.111 sys 0m1.256s 00:36:13.111 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:13.111 23:22:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:13.111 ************************************ 00:36:13.111 END TEST raid_state_function_test_sb_md_separate 00:36:13.111 ************************************ 00:36:13.111 23:22:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:13.111 23:22:02 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:36:13.111 23:22:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:36:13.111 23:22:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:13.111 23:22:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:13.111 ************************************ 00:36:13.111 START TEST raid_superblock_test_md_separate 00:36:13.111 ************************************ 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:36:13.111 23:22:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=170527 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 170527 /var/tmp/spdk-raid.sock 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 170527 ']' 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:13.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:13.111 23:22:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:13.111 [2024-07-13 23:22:02.466623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
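The startup sequence above is the stock SPDK autotest pattern: launch the standalone bdev_svc app on a private RPC socket, wait for it to listen, then drive every subsequent step through rpc.py. A minimal sketch of that pattern under the paths used in this run (waitforlisten is the helper from the autotest common scripts traced throughout this log):

    # Start the bdev service with raid debug tracing on a dedicated socket,
    # then block until the RPC socket accepts connections.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

Every rpc.py invocation that follows passes -s /var/tmp/spdk-raid.sock so it targets this instance rather than a default SPDK daemon.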
00:36:13.111 [2024-07-13 23:22:02.467045] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170527 ] 00:36:13.370 [2024-07-13 23:22:02.610106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.370 [2024-07-13 23:22:02.691221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.370 [2024-07-13 23:22:02.750052] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:36:14.324 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:14.325 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:14.325 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:14.325 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:14.325 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:36:14.325 malloc1 00:36:14.325 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:14.584 [2024-07-13 23:22:03.819435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:14.584 [2024-07-13 23:22:03.819750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:14.584 [2024-07-13 23:22:03.819943] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:36:14.584 [2024-07-13 23:22:03.820096] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:14.584 [2024-07-13 23:22:03.822764] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:14.584 [2024-07-13 23:22:03.822984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:14.584 pt1 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:36:14.584 
23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:14.584 23:22:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:36:14.843 malloc2 00:36:14.843 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:15.102 [2024-07-13 23:22:04.297337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:15.102 [2024-07-13 23:22:04.297843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.102 [2024-07-13 23:22:04.298143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:36:15.102 [2024-07-13 23:22:04.298429] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.102 [2024-07-13 23:22:04.300947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.102 [2024-07-13 23:22:04.301250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:15.102 pt2 00:36:15.102 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:15.102 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:15.102 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:15.360 [2024-07-13 23:22:04.517839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:15.360 [2024-07-13 23:22:04.520240] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:15.360 [2024-07-13 23:22:04.520605] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:36:15.360 [2024-07-13 23:22:04.520728] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:15.360 [2024-07-13 23:22:04.521021] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:36:15.360 [2024-07-13 23:22:04.521352] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:36:15.360 [2024-07-13 23:22:04.521466] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:36:15.360 [2024-07-13 23:22:04.521727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.360 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.618 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:15.618 "name": "raid_bdev1", 00:36:15.618 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:15.618 "strip_size_kb": 0, 00:36:15.618 "state": "online", 00:36:15.618 "raid_level": "raid1", 00:36:15.618 "superblock": true, 00:36:15.618 "num_base_bdevs": 2, 00:36:15.618 "num_base_bdevs_discovered": 2, 00:36:15.618 "num_base_bdevs_operational": 2, 00:36:15.618 "base_bdevs_list": [ 00:36:15.618 { 00:36:15.618 "name": "pt1", 00:36:15.618 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:15.618 "is_configured": true, 00:36:15.618 "data_offset": 256, 00:36:15.618 "data_size": 7936 00:36:15.618 }, 00:36:15.618 { 00:36:15.618 "name": "pt2", 00:36:15.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:15.618 "is_configured": true, 00:36:15.618 "data_offset": 256, 00:36:15.618 "data_size": 7936 00:36:15.618 } 00:36:15.618 ] 00:36:15.618 }' 00:36:15.618 23:22:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:15.618 23:22:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:16.184 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:16.184 [2024-07-13 23:22:05.538292] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:16.185 
23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:16.185 "name": "raid_bdev1", 00:36:16.185 "aliases": [ 00:36:16.185 "357f0e60-d925-4cbd-acb8-8349fad615bd" 00:36:16.185 ], 00:36:16.185 "product_name": "Raid Volume", 00:36:16.185 "block_size": 4096, 00:36:16.185 "num_blocks": 7936, 00:36:16.185 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:16.185 "md_size": 32, 00:36:16.185 "md_interleave": false, 00:36:16.185 "dif_type": 0, 00:36:16.185 "assigned_rate_limits": { 00:36:16.185 "rw_ios_per_sec": 0, 00:36:16.185 "rw_mbytes_per_sec": 0, 00:36:16.185 "r_mbytes_per_sec": 0, 00:36:16.185 "w_mbytes_per_sec": 0 00:36:16.185 }, 00:36:16.185 "claimed": false, 00:36:16.185 "zoned": false, 00:36:16.185 "supported_io_types": { 00:36:16.185 "read": true, 00:36:16.185 "write": true, 00:36:16.185 "unmap": false, 00:36:16.185 "flush": false, 00:36:16.185 "reset": true, 00:36:16.185 "nvme_admin": false, 00:36:16.185 "nvme_io": false, 00:36:16.185 "nvme_io_md": false, 00:36:16.185 "write_zeroes": true, 00:36:16.185 "zcopy": false, 00:36:16.185 "get_zone_info": false, 00:36:16.185 "zone_management": false, 00:36:16.185 "zone_append": false, 00:36:16.185 "compare": false, 00:36:16.185 "compare_and_write": false, 00:36:16.185 "abort": false, 00:36:16.185 "seek_hole": false, 00:36:16.185 "seek_data": false, 00:36:16.185 "copy": false, 00:36:16.185 "nvme_iov_md": false 00:36:16.185 }, 00:36:16.185 "memory_domains": [ 00:36:16.185 { 00:36:16.185 "dma_device_id": "system", 00:36:16.185 "dma_device_type": 1 00:36:16.185 }, 00:36:16.185 { 00:36:16.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.185 "dma_device_type": 2 00:36:16.185 }, 00:36:16.185 { 00:36:16.185 "dma_device_id": "system", 00:36:16.185 "dma_device_type": 1 00:36:16.185 }, 00:36:16.185 { 00:36:16.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.185 "dma_device_type": 2 00:36:16.185 } 00:36:16.185 ], 00:36:16.185 "driver_specific": { 00:36:16.185 "raid": { 00:36:16.185 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:16.185 "strip_size_kb": 0, 00:36:16.185 "state": "online", 00:36:16.185 "raid_level": "raid1", 00:36:16.185 "superblock": true, 00:36:16.185 "num_base_bdevs": 2, 00:36:16.185 "num_base_bdevs_discovered": 2, 00:36:16.185 "num_base_bdevs_operational": 2, 00:36:16.185 "base_bdevs_list": [ 00:36:16.185 { 00:36:16.185 "name": "pt1", 00:36:16.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:16.185 "is_configured": true, 00:36:16.185 "data_offset": 256, 00:36:16.185 "data_size": 7936 00:36:16.185 }, 00:36:16.185 { 00:36:16.185 "name": "pt2", 00:36:16.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:16.185 "is_configured": true, 00:36:16.185 "data_offset": 256, 00:36:16.185 "data_size": 7936 00:36:16.185 } 00:36:16.185 ] 00:36:16.185 } 00:36:16.185 } 00:36:16.185 }' 00:36:16.185 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:16.444 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:16.444 pt2' 00:36:16.444 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:16.444 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:16.444 23:22:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:16.702 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:16.702 "name": "pt1", 00:36:16.702 "aliases": [ 00:36:16.702 "00000000-0000-0000-0000-000000000001" 00:36:16.702 ], 00:36:16.702 "product_name": "passthru", 00:36:16.702 "block_size": 4096, 00:36:16.702 "num_blocks": 8192, 00:36:16.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:16.702 "md_size": 32, 00:36:16.702 "md_interleave": false, 00:36:16.702 "dif_type": 0, 00:36:16.702 "assigned_rate_limits": { 00:36:16.702 "rw_ios_per_sec": 0, 00:36:16.702 "rw_mbytes_per_sec": 0, 00:36:16.702 "r_mbytes_per_sec": 0, 00:36:16.702 "w_mbytes_per_sec": 0 00:36:16.702 }, 00:36:16.702 "claimed": true, 00:36:16.702 "claim_type": "exclusive_write", 00:36:16.703 "zoned": false, 00:36:16.703 "supported_io_types": { 00:36:16.703 "read": true, 00:36:16.703 "write": true, 00:36:16.703 "unmap": true, 00:36:16.703 "flush": true, 00:36:16.703 "reset": true, 00:36:16.703 "nvme_admin": false, 00:36:16.703 "nvme_io": false, 00:36:16.703 "nvme_io_md": false, 00:36:16.703 "write_zeroes": true, 00:36:16.703 "zcopy": true, 00:36:16.703 "get_zone_info": false, 00:36:16.703 "zone_management": false, 00:36:16.703 "zone_append": false, 00:36:16.703 "compare": false, 00:36:16.703 "compare_and_write": false, 00:36:16.703 "abort": true, 00:36:16.703 "seek_hole": false, 00:36:16.703 "seek_data": false, 00:36:16.703 "copy": true, 00:36:16.703 "nvme_iov_md": false 00:36:16.703 }, 00:36:16.703 "memory_domains": [ 00:36:16.703 { 00:36:16.703 "dma_device_id": "system", 00:36:16.703 "dma_device_type": 1 00:36:16.703 }, 00:36:16.703 { 00:36:16.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.703 "dma_device_type": 2 00:36:16.703 } 00:36:16.703 ], 00:36:16.703 "driver_specific": { 00:36:16.703 "passthru": { 00:36:16.703 "name": "pt1", 00:36:16.703 "base_bdev_name": "malloc1" 00:36:16.703 } 00:36:16.703 } 00:36:16.703 }' 00:36:16.703 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:16.703 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:16.703 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:16.703 23:22:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:16.703 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:16.703 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:16.703 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:16.961 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:16.961 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:16.961 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:16.961 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:16.961 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:16.961 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:16.962 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:16.962 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:17.220 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:17.220 "name": "pt2", 00:36:17.220 "aliases": [ 00:36:17.220 "00000000-0000-0000-0000-000000000002" 00:36:17.220 ], 00:36:17.220 "product_name": "passthru", 00:36:17.220 "block_size": 4096, 00:36:17.220 "num_blocks": 8192, 00:36:17.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:17.220 "md_size": 32, 00:36:17.220 "md_interleave": false, 00:36:17.220 "dif_type": 0, 00:36:17.220 "assigned_rate_limits": { 00:36:17.220 "rw_ios_per_sec": 0, 00:36:17.220 "rw_mbytes_per_sec": 0, 00:36:17.220 "r_mbytes_per_sec": 0, 00:36:17.220 "w_mbytes_per_sec": 0 00:36:17.220 }, 00:36:17.220 "claimed": true, 00:36:17.220 "claim_type": "exclusive_write", 00:36:17.220 "zoned": false, 00:36:17.220 "supported_io_types": { 00:36:17.220 "read": true, 00:36:17.220 "write": true, 00:36:17.220 "unmap": true, 00:36:17.220 "flush": true, 00:36:17.220 "reset": true, 00:36:17.220 "nvme_admin": false, 00:36:17.220 "nvme_io": false, 00:36:17.220 "nvme_io_md": false, 00:36:17.220 "write_zeroes": true, 00:36:17.220 "zcopy": true, 00:36:17.220 "get_zone_info": false, 00:36:17.220 "zone_management": false, 00:36:17.220 "zone_append": false, 00:36:17.220 "compare": false, 00:36:17.220 "compare_and_write": false, 00:36:17.220 "abort": true, 00:36:17.220 "seek_hole": false, 00:36:17.220 "seek_data": false, 00:36:17.220 "copy": true, 00:36:17.220 "nvme_iov_md": false 00:36:17.220 }, 00:36:17.220 "memory_domains": [ 00:36:17.220 { 00:36:17.220 "dma_device_id": "system", 00:36:17.220 "dma_device_type": 1 00:36:17.220 }, 00:36:17.220 { 00:36:17.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.220 "dma_device_type": 2 00:36:17.220 } 00:36:17.220 ], 00:36:17.220 "driver_specific": { 00:36:17.220 "passthru": { 00:36:17.220 "name": "pt2", 00:36:17.220 "base_bdev_name": "malloc2" 00:36:17.220 } 00:36:17.220 } 00:36:17.220 }' 00:36:17.220 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:17.220 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:17.220 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:17.220 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:17.478 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:17.478 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:17.478 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:17.479 23:22:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:36:17.737 [2024-07-13 23:22:07.066647] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:17.737 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=357f0e60-d925-4cbd-acb8-8349fad615bd 00:36:17.737 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 357f0e60-d925-4cbd-acb8-8349fad615bd ']' 00:36:17.737 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:17.996 [2024-07-13 23:22:07.346452] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:17.996 [2024-07-13 23:22:07.346645] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:17.996 [2024-07-13 23:22:07.346879] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:17.996 [2024-07-13 23:22:07.347087] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:17.996 [2024-07-13 23:22:07.347225] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:36:17.996 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.996 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:36:18.254 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:36:18.254 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:36:18.254 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:18.254 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:18.513 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:18.513 23:22:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:18.772 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:18.772 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:19.031 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:19.290 [2024-07-13 23:22:08.502654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:19.290 [2024-07-13 23:22:08.505000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:19.290 [2024-07-13 23:22:08.505220] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:19.290 [2024-07-13 23:22:08.505470] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:19.290 [2024-07-13 23:22:08.505630] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:19.290 [2024-07-13 23:22:08.505724] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:36:19.290 request: 00:36:19.290 { 00:36:19.290 "name": "raid_bdev1", 00:36:19.290 "raid_level": "raid1", 00:36:19.290 "base_bdevs": [ 00:36:19.290 "malloc1", 00:36:19.290 "malloc2" 00:36:19.290 ], 00:36:19.290 "superblock": false, 00:36:19.290 "method": "bdev_raid_create", 00:36:19.290 "req_id": 1 00:36:19.290 } 00:36:19.290 Got JSON-RPC error response 00:36:19.290 response: 00:36:19.290 { 00:36:19.290 "code": -17, 00:36:19.290 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:19.290 } 00:36:19.290 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:36:19.290 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:19.290 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:19.290 23:22:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:19.290 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
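The request/response pair above is the expected-failure path of this test: malloc1 and malloc2 still carry the superblock written earlier for raid_bdev1 (note the "Superblock of a different raid bdev found" errors), so a second bdev_raid_create over them is rejected with -17, File exists, and the harness's NOT wrapper turns that failure into a pass. A standalone sketch of the same negative check, assuming the socket from this run:

    # This create is expected to fail: the base bdevs hold a superblock
    # belonging to an existing raid bdev, so configuration must be refused.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
           bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'bdev_raid_create unexpectedly succeeded' >&2
        exit 1
    fi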
00:36:19.291 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:36:19.550 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:36:19.550 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:36:19.550 23:22:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:19.808 [2024-07-13 23:22:09.046751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:19.808 [2024-07-13 23:22:09.047068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:19.808 [2024-07-13 23:22:09.047257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:19.808 [2024-07-13 23:22:09.047410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:19.808 [2024-07-13 23:22:09.049786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:19.808 [2024-07-13 23:22:09.049963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:19.808 [2024-07-13 23:22:09.050180] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:19.808 [2024-07-13 23:22:09.050365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:19.808 pt1 00:36:19.808 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.809 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.067 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:20.067 "name": "raid_bdev1", 00:36:20.067 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:20.067 "strip_size_kb": 0, 00:36:20.067 "state": "configuring", 00:36:20.067 "raid_level": "raid1", 00:36:20.067 "superblock": true, 00:36:20.067 "num_base_bdevs": 2, 00:36:20.067 "num_base_bdevs_discovered": 1, 00:36:20.068 
"num_base_bdevs_operational": 2, 00:36:20.068 "base_bdevs_list": [ 00:36:20.068 { 00:36:20.068 "name": "pt1", 00:36:20.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:20.068 "is_configured": true, 00:36:20.068 "data_offset": 256, 00:36:20.068 "data_size": 7936 00:36:20.068 }, 00:36:20.068 { 00:36:20.068 "name": null, 00:36:20.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:20.068 "is_configured": false, 00:36:20.068 "data_offset": 256, 00:36:20.068 "data_size": 7936 00:36:20.068 } 00:36:20.068 ] 00:36:20.068 }' 00:36:20.068 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:20.068 23:22:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:20.635 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:36:20.635 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:36:20.635 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:20.635 23:22:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:20.894 [2024-07-13 23:22:10.151018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:20.894 [2024-07-13 23:22:10.151299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.894 [2024-07-13 23:22:10.151449] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:36:20.894 [2024-07-13 23:22:10.151592] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.894 [2024-07-13 23:22:10.151927] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.894 [2024-07-13 23:22:10.152108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:20.894 [2024-07-13 23:22:10.152304] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:20.894 [2024-07-13 23:22:10.152435] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:20.894 [2024-07-13 23:22:10.152645] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:36:20.894 [2024-07-13 23:22:10.152762] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:20.894 [2024-07-13 23:22:10.152890] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:36:20.894 [2024-07-13 23:22:10.153175] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:36:20.894 [2024-07-13 23:22:10.153298] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:36:20.894 [2024-07-13 23:22:10.153483] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:20.894 pt2 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:20.894 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.152 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:21.153 "name": "raid_bdev1", 00:36:21.153 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:21.153 "strip_size_kb": 0, 00:36:21.153 "state": "online", 00:36:21.153 "raid_level": "raid1", 00:36:21.153 "superblock": true, 00:36:21.153 "num_base_bdevs": 2, 00:36:21.153 "num_base_bdevs_discovered": 2, 00:36:21.153 "num_base_bdevs_operational": 2, 00:36:21.153 "base_bdevs_list": [ 00:36:21.153 { 00:36:21.153 "name": "pt1", 00:36:21.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:21.153 "is_configured": true, 00:36:21.153 "data_offset": 256, 00:36:21.153 "data_size": 7936 00:36:21.153 }, 00:36:21.153 { 00:36:21.153 "name": "pt2", 00:36:21.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:21.153 "is_configured": true, 00:36:21.153 "data_offset": 256, 00:36:21.153 "data_size": 7936 00:36:21.153 } 00:36:21.153 ] 00:36:21.153 }' 00:36:21.153 23:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:21.153 23:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:21.719 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
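The state checks that bracket each step above all follow one pattern: fetch the raid bdev's JSON through rpc.py, pick the entry out with jq, and compare the expected fields (state, raid_level, base bdev counts) against the locals declared at the top of verify_raid_bdev_state. What follows is a minimal standalone sketch of that pattern, assuming rpc.py and jq are on PATH and the target listens on the same UNIX socket as in the trace; the helper name check_raid_state is hypothetical, not the suite's own function.

    # Minimal sketch of the verify_raid_bdev_state pattern traced above.
    # Assumes the SPDK target listens on /var/tmp/spdk-raid.sock as in the log;
    # check_raid_state is a hypothetical name used for illustration only.
    check_raid_state() {
        local name=$1 expected_state=$2 expected_level=$3 expected_discovered=$4
        local info
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level <<< "$info") == "$expected_level" ]] &&
        [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == "$expected_discovered" ]]
    }
    # e.g.: check_raid_state raid_bdev1 online raid1 2

Read this as a sketch of the control flow only; the real helper additionally compares strip_size and num_base_bdevs_operational, as the locals in the trace show.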
00:36:21.978 [2024-07-13 23:22:11.307519] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:21.978 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:21.978 "name": "raid_bdev1", 00:36:21.978 "aliases": [ 00:36:21.978 "357f0e60-d925-4cbd-acb8-8349fad615bd" 00:36:21.978 ], 00:36:21.978 "product_name": "Raid Volume", 00:36:21.978 "block_size": 4096, 00:36:21.978 "num_blocks": 7936, 00:36:21.978 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:21.978 "md_size": 32, 00:36:21.978 "md_interleave": false, 00:36:21.978 "dif_type": 0, 00:36:21.978 "assigned_rate_limits": { 00:36:21.978 "rw_ios_per_sec": 0, 00:36:21.978 "rw_mbytes_per_sec": 0, 00:36:21.978 "r_mbytes_per_sec": 0, 00:36:21.978 "w_mbytes_per_sec": 0 00:36:21.978 }, 00:36:21.978 "claimed": false, 00:36:21.978 "zoned": false, 00:36:21.978 "supported_io_types": { 00:36:21.978 "read": true, 00:36:21.978 "write": true, 00:36:21.978 "unmap": false, 00:36:21.978 "flush": false, 00:36:21.978 "reset": true, 00:36:21.978 "nvme_admin": false, 00:36:21.978 "nvme_io": false, 00:36:21.978 "nvme_io_md": false, 00:36:21.978 "write_zeroes": true, 00:36:21.978 "zcopy": false, 00:36:21.978 "get_zone_info": false, 00:36:21.978 "zone_management": false, 00:36:21.978 "zone_append": false, 00:36:21.978 "compare": false, 00:36:21.978 "compare_and_write": false, 00:36:21.978 "abort": false, 00:36:21.978 "seek_hole": false, 00:36:21.978 "seek_data": false, 00:36:21.978 "copy": false, 00:36:21.978 "nvme_iov_md": false 00:36:21.978 }, 00:36:21.978 "memory_domains": [ 00:36:21.978 { 00:36:21.978 "dma_device_id": "system", 00:36:21.978 "dma_device_type": 1 00:36:21.978 }, 00:36:21.978 { 00:36:21.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.978 "dma_device_type": 2 00:36:21.978 }, 00:36:21.978 { 00:36:21.978 "dma_device_id": "system", 00:36:21.978 "dma_device_type": 1 00:36:21.978 }, 00:36:21.978 { 00:36:21.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.978 "dma_device_type": 2 00:36:21.978 } 00:36:21.978 ], 00:36:21.978 "driver_specific": { 00:36:21.978 "raid": { 00:36:21.978 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:21.978 "strip_size_kb": 0, 00:36:21.978 "state": "online", 00:36:21.978 "raid_level": "raid1", 00:36:21.978 "superblock": true, 00:36:21.978 "num_base_bdevs": 2, 00:36:21.978 "num_base_bdevs_discovered": 2, 00:36:21.978 "num_base_bdevs_operational": 2, 00:36:21.978 "base_bdevs_list": [ 00:36:21.978 { 00:36:21.978 "name": "pt1", 00:36:21.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:21.978 "is_configured": true, 00:36:21.978 "data_offset": 256, 00:36:21.978 "data_size": 7936 00:36:21.978 }, 00:36:21.978 { 00:36:21.978 "name": "pt2", 00:36:21.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:21.978 "is_configured": true, 00:36:21.978 "data_offset": 256, 00:36:21.978 "data_size": 7936 00:36:21.978 } 00:36:21.978 ] 00:36:21.978 } 00:36:21.978 } 00:36:21.978 }' 00:36:21.978 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:22.237 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:22.237 pt2' 00:36:22.237 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:22.237 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:22.237 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:22.237 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:22.237 "name": "pt1", 00:36:22.237 "aliases": [ 00:36:22.237 "00000000-0000-0000-0000-000000000001" 00:36:22.237 ], 00:36:22.237 "product_name": "passthru", 00:36:22.237 "block_size": 4096, 00:36:22.237 "num_blocks": 8192, 00:36:22.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:22.237 "md_size": 32, 00:36:22.237 "md_interleave": false, 00:36:22.237 "dif_type": 0, 00:36:22.237 "assigned_rate_limits": { 00:36:22.237 "rw_ios_per_sec": 0, 00:36:22.237 "rw_mbytes_per_sec": 0, 00:36:22.237 "r_mbytes_per_sec": 0, 00:36:22.237 "w_mbytes_per_sec": 0 00:36:22.237 }, 00:36:22.237 "claimed": true, 00:36:22.237 "claim_type": "exclusive_write", 00:36:22.237 "zoned": false, 00:36:22.237 "supported_io_types": { 00:36:22.237 "read": true, 00:36:22.237 "write": true, 00:36:22.237 "unmap": true, 00:36:22.237 "flush": true, 00:36:22.237 "reset": true, 00:36:22.237 "nvme_admin": false, 00:36:22.237 "nvme_io": false, 00:36:22.237 "nvme_io_md": false, 00:36:22.237 "write_zeroes": true, 00:36:22.237 "zcopy": true, 00:36:22.237 "get_zone_info": false, 00:36:22.237 "zone_management": false, 00:36:22.237 "zone_append": false, 00:36:22.237 "compare": false, 00:36:22.237 "compare_and_write": false, 00:36:22.237 "abort": true, 00:36:22.237 "seek_hole": false, 00:36:22.237 "seek_data": false, 00:36:22.237 "copy": true, 00:36:22.237 "nvme_iov_md": false 00:36:22.237 }, 00:36:22.237 "memory_domains": [ 00:36:22.237 { 00:36:22.237 "dma_device_id": "system", 00:36:22.237 "dma_device_type": 1 00:36:22.237 }, 00:36:22.237 { 00:36:22.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.237 "dma_device_type": 2 00:36:22.237 } 00:36:22.237 ], 00:36:22.237 "driver_specific": { 00:36:22.238 "passthru": { 00:36:22.238 "name": "pt1", 00:36:22.238 "base_bdev_name": "malloc1" 00:36:22.238 } 00:36:22.238 } 00:36:22.238 }' 00:36:22.238 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:22.496 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:22.755 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:22.755 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:22.755 23:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:22.755 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:22.755 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:36:22.755 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:22.755 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:23.015 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:23.015 "name": "pt2", 00:36:23.015 "aliases": [ 00:36:23.015 "00000000-0000-0000-0000-000000000002" 00:36:23.015 ], 00:36:23.015 "product_name": "passthru", 00:36:23.015 "block_size": 4096, 00:36:23.015 "num_blocks": 8192, 00:36:23.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.015 "md_size": 32, 00:36:23.015 "md_interleave": false, 00:36:23.015 "dif_type": 0, 00:36:23.015 "assigned_rate_limits": { 00:36:23.015 "rw_ios_per_sec": 0, 00:36:23.015 "rw_mbytes_per_sec": 0, 00:36:23.015 "r_mbytes_per_sec": 0, 00:36:23.015 "w_mbytes_per_sec": 0 00:36:23.015 }, 00:36:23.015 "claimed": true, 00:36:23.015 "claim_type": "exclusive_write", 00:36:23.015 "zoned": false, 00:36:23.015 "supported_io_types": { 00:36:23.015 "read": true, 00:36:23.015 "write": true, 00:36:23.015 "unmap": true, 00:36:23.015 "flush": true, 00:36:23.015 "reset": true, 00:36:23.015 "nvme_admin": false, 00:36:23.015 "nvme_io": false, 00:36:23.015 "nvme_io_md": false, 00:36:23.015 "write_zeroes": true, 00:36:23.015 "zcopy": true, 00:36:23.015 "get_zone_info": false, 00:36:23.015 "zone_management": false, 00:36:23.015 "zone_append": false, 00:36:23.015 "compare": false, 00:36:23.015 "compare_and_write": false, 00:36:23.015 "abort": true, 00:36:23.015 "seek_hole": false, 00:36:23.015 "seek_data": false, 00:36:23.015 "copy": true, 00:36:23.015 "nvme_iov_md": false 00:36:23.015 }, 00:36:23.015 "memory_domains": [ 00:36:23.015 { 00:36:23.015 "dma_device_id": "system", 00:36:23.015 "dma_device_type": 1 00:36:23.015 }, 00:36:23.015 { 00:36:23.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:23.015 "dma_device_type": 2 00:36:23.015 } 00:36:23.015 ], 00:36:23.015 "driver_specific": { 00:36:23.015 "passthru": { 00:36:23.015 "name": "pt2", 00:36:23.015 "base_bdev_name": "malloc2" 00:36:23.015 } 00:36:23.015 } 00:36:23.015 }' 00:36:23.015 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:23.015 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:23.015 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:23.015 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:23.015 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:23.275 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:36:23.533 [2024-07-13 23:22:12.892374] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:23.533 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 357f0e60-d925-4cbd-acb8-8349fad615bd '!=' 357f0e60-d925-4cbd-acb8-8349fad615bd ']' 00:36:23.533 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:36:23.533 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:23.533 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:36:23.533 23:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:23.792 [2024-07-13 23:22:13.180185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:24.049 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:24.049 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:24.050 "name": "raid_bdev1", 00:36:24.050 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:24.050 "strip_size_kb": 0, 00:36:24.050 "state": "online", 00:36:24.050 "raid_level": "raid1", 00:36:24.050 "superblock": true, 00:36:24.050 "num_base_bdevs": 2, 00:36:24.050 "num_base_bdevs_discovered": 1, 00:36:24.050 "num_base_bdevs_operational": 1, 00:36:24.050 "base_bdevs_list": [ 00:36:24.050 { 00:36:24.050 "name": null, 00:36:24.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.050 "is_configured": false, 00:36:24.050 "data_offset": 256, 00:36:24.050 "data_size": 7936 00:36:24.050 }, 
00:36:24.050 { 00:36:24.050 "name": "pt2", 00:36:24.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:24.050 "is_configured": true, 00:36:24.050 "data_offset": 256, 00:36:24.050 "data_size": 7936 00:36:24.050 } 00:36:24.050 ] 00:36:24.050 }' 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:24.050 23:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:24.674 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:24.933 [2024-07-13 23:22:14.260385] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:24.933 [2024-07-13 23:22:14.260636] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:24.933 [2024-07-13 23:22:14.260820] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:24.933 [2024-07-13 23:22:14.261026] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:24.933 [2024-07-13 23:22:14.261145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:36:24.933 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.933 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:36:25.190 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:36:25.190 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:36:25.190 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:36:25.190 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:25.190 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:25.447 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:36:25.447 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:25.448 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:36:25.448 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:36:25.448 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:36:25.448 23:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:25.706 [2024-07-13 23:22:15.004508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:25.706 [2024-07-13 23:22:15.004814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:25.706 [2024-07-13 23:22:15.005040] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:25.706 [2024-07-13 23:22:15.005176] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:36:25.706 [2024-07-13 23:22:15.007511] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:25.706 [2024-07-13 23:22:15.007685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:25.706 [2024-07-13 23:22:15.007893] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:25.706 [2024-07-13 23:22:15.008057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:25.706 [2024-07-13 23:22:15.008244] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:36:25.706 [2024-07-13 23:22:15.008374] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:25.706 [2024-07-13 23:22:15.008497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:36:25.706 [2024-07-13 23:22:15.008703] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:36:25.706 [2024-07-13 23:22:15.008814] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:36:25.706 [2024-07-13 23:22:15.009087] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:25.706 pt2 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.706 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.964 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:25.964 "name": "raid_bdev1", 00:36:25.964 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:25.964 "strip_size_kb": 0, 00:36:25.964 "state": "online", 00:36:25.964 "raid_level": "raid1", 00:36:25.964 "superblock": true, 00:36:25.964 "num_base_bdevs": 2, 00:36:25.964 "num_base_bdevs_discovered": 1, 00:36:25.964 "num_base_bdevs_operational": 1, 00:36:25.964 "base_bdevs_list": [ 00:36:25.964 { 00:36:25.964 "name": null, 00:36:25.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.964 "is_configured": false, 00:36:25.964 "data_offset": 256, 00:36:25.964 "data_size": 7936 00:36:25.964 }, 
00:36:25.964 { 00:36:25.964 "name": "pt2", 00:36:25.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:25.964 "is_configured": true, 00:36:25.965 "data_offset": 256, 00:36:25.965 "data_size": 7936 00:36:25.965 } 00:36:25.965 ] 00:36:25.965 }' 00:36:25.965 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:25.965 23:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:26.531 23:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:26.788 [2024-07-13 23:22:16.109392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:26.788 [2024-07-13 23:22:16.109609] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:26.788 [2024-07-13 23:22:16.109823] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:26.788 [2024-07-13 23:22:16.109975] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:26.788 [2024-07-13 23:22:16.110092] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:36:26.788 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.788 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:36:27.046 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:36:27.046 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:36:27.046 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:36:27.046 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:27.304 [2024-07-13 23:22:16.605594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:27.304 [2024-07-13 23:22:16.605915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.304 [2024-07-13 23:22:16.606097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:27.304 [2024-07-13 23:22:16.606220] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.304 [2024-07-13 23:22:16.608413] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.304 [2024-07-13 23:22:16.608594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:27.304 [2024-07-13 23:22:16.608805] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:27.304 [2024-07-13 23:22:16.609001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:27.304 [2024-07-13 23:22:16.609298] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:27.304 [2024-07-13 23:22:16.609449] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:27.304 [2024-07-13 23:22:16.609527] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:36:27.304 [2024-07-13 23:22:16.609742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:27.304 [2024-07-13 23:22:16.609961] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:36:27.304 [2024-07-13 23:22:16.610053] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:27.304 [2024-07-13 23:22:16.610218] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:36:27.304 [2024-07-13 23:22:16.610403] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:36:27.304 [2024-07-13 23:22:16.610497] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:36:27.304 [2024-07-13 23:22:16.610718] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:27.304 pt1 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.304 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.561 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:27.561 "name": "raid_bdev1", 00:36:27.561 "uuid": "357f0e60-d925-4cbd-acb8-8349fad615bd", 00:36:27.561 "strip_size_kb": 0, 00:36:27.561 "state": "online", 00:36:27.561 "raid_level": "raid1", 00:36:27.561 "superblock": true, 00:36:27.561 "num_base_bdevs": 2, 00:36:27.561 "num_base_bdevs_discovered": 1, 00:36:27.561 "num_base_bdevs_operational": 1, 00:36:27.561 "base_bdevs_list": [ 00:36:27.561 { 00:36:27.561 "name": null, 00:36:27.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.561 "is_configured": false, 00:36:27.561 "data_offset": 256, 00:36:27.561 "data_size": 7936 00:36:27.561 }, 00:36:27.561 { 00:36:27.561 "name": "pt2", 00:36:27.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:27.561 "is_configured": true, 00:36:27.561 "data_offset": 256, 
00:36:27.561 "data_size": 7936 00:36:27.561 } 00:36:27.561 ] 00:36:27.561 }' 00:36:27.561 23:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:27.561 23:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:28.496 23:22:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:36:28.496 23:22:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:28.496 23:22:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:36:28.496 23:22:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:28.496 23:22:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:36:28.754 [2024-07-13 23:22:18.010601] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 357f0e60-d925-4cbd-acb8-8349fad615bd '!=' 357f0e60-d925-4cbd-acb8-8349fad615bd ']' 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 170527 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 170527 ']' 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 170527 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170527 00:36:28.754 killing process with pid 170527 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170527' 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 170527 00:36:28.754 [2024-07-13 23:22:18.054159] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:28.754 [2024-07-13 23:22:18.054237] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:28.754 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 170527 00:36:28.754 [2024-07-13 23:22:18.054290] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:28.754 [2024-07-13 23:22:18.054300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:36:28.754 [2024-07-13 23:22:18.075266] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:29.019 23:22:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:36:29.019 00:36:29.019 real 0m15.878s 00:36:29.019 user 
0m29.886s 00:36:29.019 sys 0m2.006s 00:36:29.019 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:29.019 23:22:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:29.019 ************************************ 00:36:29.019 END TEST raid_superblock_test_md_separate 00:36:29.019 ************************************ 00:36:29.019 23:22:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:29.019 23:22:18 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:36:29.019 23:22:18 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:36:29.019 23:22:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:36:29.019 23:22:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:29.019 23:22:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:29.019 ************************************ 00:36:29.019 START TEST raid_rebuild_test_sb_md_separate 00:36:29.019 ************************************ 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@578 -- # local data_offset 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=171044 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 171044 /var/tmp/spdk-raid.sock 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 171044 ']' 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:29.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:29.019 23:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:29.279 [2024-07-13 23:22:18.426568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:36:29.280 [2024-07-13 23:22:18.427026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171044 ] 00:36:29.280 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:29.280 Zero copy mechanism will not be used. 
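The rebuild test drives everything through the bdevperf instance launched with the command line logged above, then waits for its RPC socket before issuing any bdev calls. Below is a minimal sketch of that bring-up under the same binary and socket paths as the trace; the wrapper name start_bdevperf is hypothetical, and the rpc_get_methods poll is an illustrative stand-in for the suite's waitforlisten helper.

    # Minimal sketch of the bdevperf bring-up traced above (flags copied from
    # the logged command line; start_bdevperf is a hypothetical wrapper and the
    # rpc_get_methods poll stands in for the suite's waitforlisten helper).
    start_bdevperf() {
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
            -o 3M -q 2 -U -z -L bdev_raid &
        raid_pid=$!
        # Block until the app accepts RPCs on the UNIX domain socket.
        until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                rpc_get_methods >/dev/null 2>&1; do
            sleep 0.1
        done
    }
    # Teardown mirrors the killprocess step seen earlier: kill "$raid_pid"; wait "$raid_pid"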
00:36:29.280 [2024-07-13 23:22:18.577807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.280 [2024-07-13 23:22:18.663334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.537 [2024-07-13 23:22:18.722035] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:30.105 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.105 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:36:30.105 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:30.105 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:36:30.363 BaseBdev1_malloc 00:36:30.363 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:30.621 [2024-07-13 23:22:19.821684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:30.621 [2024-07-13 23:22:19.821954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:30.621 [2024-07-13 23:22:19.822109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:36:30.621 [2024-07-13 23:22:19.822261] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:30.621 [2024-07-13 23:22:19.824554] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:30.621 [2024-07-13 23:22:19.824732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:30.621 BaseBdev1 00:36:30.621 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:30.621 23:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:36:30.880 BaseBdev2_malloc 00:36:30.880 23:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:31.138 [2024-07-13 23:22:20.308950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:31.138 [2024-07-13 23:22:20.309224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:31.138 [2024-07-13 23:22:20.309460] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:36:31.138 [2024-07-13 23:22:20.309615] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:31.138 [2024-07-13 23:22:20.311895] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:31.138 [2024-07-13 23:22:20.312070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:31.138 BaseBdev2 00:36:31.138 23:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:36:31.397 spare_malloc 00:36:31.397 23:22:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:31.656 spare_delay 00:36:31.656 23:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:31.914 [2024-07-13 23:22:21.068535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:31.914 [2024-07-13 23:22:21.068788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:31.914 [2024-07-13 23:22:21.068998] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:31.914 [2024-07-13 23:22:21.069209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:31.914 [2024-07-13 23:22:21.071520] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:31.914 [2024-07-13 23:22:21.071707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:31.914 spare 00:36:31.914 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:36:31.914 [2024-07-13 23:22:21.312696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:31.914 [2024-07-13 23:22:21.314886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:31.914 [2024-07-13 23:22:21.315291] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:36:31.914 [2024-07-13 23:22:21.315438] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:31.914 [2024-07-13 23:22:21.315670] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:36:31.914 [2024-07-13 23:22:21.315965] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:36:31.914 [2024-07-13 23:22:21.316091] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:36:31.914 [2024-07-13 23:22:21.316331] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:32.173 "name": "raid_bdev1", 00:36:32.173 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:32.173 "strip_size_kb": 0, 00:36:32.173 "state": "online", 00:36:32.173 "raid_level": "raid1", 00:36:32.173 "superblock": true, 00:36:32.173 "num_base_bdevs": 2, 00:36:32.173 "num_base_bdevs_discovered": 2, 00:36:32.173 "num_base_bdevs_operational": 2, 00:36:32.173 "base_bdevs_list": [ 00:36:32.173 { 00:36:32.173 "name": "BaseBdev1", 00:36:32.173 "uuid": "7402471f-e135-5df0-b3f3-3da1ec07e1d8", 00:36:32.173 "is_configured": true, 00:36:32.173 "data_offset": 256, 00:36:32.173 "data_size": 7936 00:36:32.173 }, 00:36:32.173 { 00:36:32.173 "name": "BaseBdev2", 00:36:32.173 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:32.173 "is_configured": true, 00:36:32.173 "data_offset": 256, 00:36:32.173 "data_size": 7936 00:36:32.173 } 00:36:32.173 ] 00:36:32.173 }' 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:32.173 23:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:33.109 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:33.109 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:33.109 [2024-07-13 23:22:22.445359] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:33.109 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:36:33.109 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:33.109 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:33.367 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:33.368 23:22:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:33.368 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:33.368 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:36:33.368 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:33.368 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:33.368 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:33.626 [2024-07-13 23:22:22.941575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:36:33.626 /dev/nbd0 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:33.626 1+0 records in 00:36:33.626 1+0 records out 00:36:33.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477844 s, 8.6 MB/s 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:36:33.626 23:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:34.561 7936+0 records in 00:36:34.561 7936+0 records out 00:36:34.561 32505856 bytes (33 MB, 31 MiB) copied, 0.800107 s, 40.6 MB/s 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:34.561 23:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:34.819 [2024-07-13 23:22:24.032388] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:34.819 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:35.077 [2024-07-13 23:22:24.244167] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.077 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.335 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:35.335 "name": "raid_bdev1", 00:36:35.335 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:35.335 "strip_size_kb": 0, 00:36:35.335 "state": "online", 00:36:35.335 "raid_level": "raid1", 00:36:35.335 "superblock": true, 00:36:35.335 "num_base_bdevs": 2, 00:36:35.335 "num_base_bdevs_discovered": 1, 00:36:35.335 "num_base_bdevs_operational": 1, 00:36:35.335 "base_bdevs_list": [ 00:36:35.335 { 00:36:35.335 "name": null, 00:36:35.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.335 "is_configured": false, 00:36:35.335 "data_offset": 256, 00:36:35.335 "data_size": 7936 00:36:35.335 }, 00:36:35.335 { 00:36:35.335 "name": "BaseBdev2", 00:36:35.335 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:35.335 "is_configured": true, 00:36:35.335 "data_offset": 256, 00:36:35.335 "data_size": 7936 00:36:35.335 } 00:36:35.335 ] 00:36:35.335 }' 00:36:35.335 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:35.335 23:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:35.901 23:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:36.160 [2024-07-13 23:22:25.332442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:36.160 [2024-07-13 23:22:25.335058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c7c0 00:36:36.160 [2024-07-13 23:22:25.337337] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:36.160 23:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.110 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.386 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.386 "name": "raid_bdev1", 00:36:37.386 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:37.386 "strip_size_kb": 0, 
00:36:37.386 "state": "online", 00:36:37.386 "raid_level": "raid1", 00:36:37.386 "superblock": true, 00:36:37.386 "num_base_bdevs": 2, 00:36:37.386 "num_base_bdevs_discovered": 2, 00:36:37.386 "num_base_bdevs_operational": 2, 00:36:37.386 "process": { 00:36:37.386 "type": "rebuild", 00:36:37.386 "target": "spare", 00:36:37.386 "progress": { 00:36:37.386 "blocks": 3072, 00:36:37.386 "percent": 38 00:36:37.386 } 00:36:37.386 }, 00:36:37.386 "base_bdevs_list": [ 00:36:37.386 { 00:36:37.386 "name": "spare", 00:36:37.386 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:37.386 "is_configured": true, 00:36:37.386 "data_offset": 256, 00:36:37.386 "data_size": 7936 00:36:37.386 }, 00:36:37.386 { 00:36:37.386 "name": "BaseBdev2", 00:36:37.386 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:37.386 "is_configured": true, 00:36:37.386 "data_offset": 256, 00:36:37.386 "data_size": 7936 00:36:37.386 } 00:36:37.386 ] 00:36:37.386 }' 00:36:37.386 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:37.386 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:37.386 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:37.386 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:37.386 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:37.646 [2024-07-13 23:22:26.935147] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:37.646 [2024-07-13 23:22:26.948322] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:37.646 [2024-07-13 23:22:26.948553] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.646 [2024-07-13 23:22:26.948685] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:37.646 [2024-07-13 23:22:26.948785] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:37.646 23:22:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.646 23:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.905 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:37.905 "name": "raid_bdev1", 00:36:37.905 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:37.905 "strip_size_kb": 0, 00:36:37.905 "state": "online", 00:36:37.905 "raid_level": "raid1", 00:36:37.905 "superblock": true, 00:36:37.905 "num_base_bdevs": 2, 00:36:37.905 "num_base_bdevs_discovered": 1, 00:36:37.905 "num_base_bdevs_operational": 1, 00:36:37.905 "base_bdevs_list": [ 00:36:37.905 { 00:36:37.905 "name": null, 00:36:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.905 "is_configured": false, 00:36:37.905 "data_offset": 256, 00:36:37.905 "data_size": 7936 00:36:37.905 }, 00:36:37.905 { 00:36:37.905 "name": "BaseBdev2", 00:36:37.905 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:37.905 "is_configured": true, 00:36:37.905 "data_offset": 256, 00:36:37.905 "data_size": 7936 00:36:37.905 } 00:36:37.905 ] 00:36:37.905 }' 00:36:37.905 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:37.905 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:38.473 23:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.040 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:39.040 "name": "raid_bdev1", 00:36:39.040 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:39.040 "strip_size_kb": 0, 00:36:39.040 "state": "online", 00:36:39.040 "raid_level": "raid1", 00:36:39.040 "superblock": true, 00:36:39.040 "num_base_bdevs": 2, 00:36:39.040 "num_base_bdevs_discovered": 1, 00:36:39.040 "num_base_bdevs_operational": 1, 00:36:39.040 "base_bdevs_list": [ 00:36:39.040 { 00:36:39.040 "name": null, 00:36:39.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.040 "is_configured": false, 00:36:39.040 "data_offset": 256, 00:36:39.040 "data_size": 7936 00:36:39.040 }, 00:36:39.040 { 00:36:39.040 "name": "BaseBdev2", 00:36:39.040 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:39.040 "is_configured": true, 00:36:39.040 "data_offset": 256, 00:36:39.040 "data_size": 7936 00:36:39.040 } 00:36:39.040 ] 00:36:39.040 }' 00:36:39.040 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
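[editor's sketch] verify_raid_bdev_process reduces each check to the two jq expressions visible in the trace, defaulting to "none" when no background process is running; the escaped comparisons in the log ([[ none == \n\o\n\e ]]) are just bash's xtrace rendering of a plain string match. Condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.process.type   // "none"' <<< "$info") == "none" ]]
  [[ $(jq -r '.process.target // "none"' <<< "$info") == "none" ]]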
00:36:39.040 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:39.040 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:39.040 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:39.040 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:39.297 [2024-07-13 23:22:28.469336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:39.297 [2024-07-13 23:22:28.471739] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:36:39.297 [2024-07-13 23:22:28.474166] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:39.297 23:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.231 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:40.489 "name": "raid_bdev1", 00:36:40.489 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:40.489 "strip_size_kb": 0, 00:36:40.489 "state": "online", 00:36:40.489 "raid_level": "raid1", 00:36:40.489 "superblock": true, 00:36:40.489 "num_base_bdevs": 2, 00:36:40.489 "num_base_bdevs_discovered": 2, 00:36:40.489 "num_base_bdevs_operational": 2, 00:36:40.489 "process": { 00:36:40.489 "type": "rebuild", 00:36:40.489 "target": "spare", 00:36:40.489 "progress": { 00:36:40.489 "blocks": 3072, 00:36:40.489 "percent": 38 00:36:40.489 } 00:36:40.489 }, 00:36:40.489 "base_bdevs_list": [ 00:36:40.489 { 00:36:40.489 "name": "spare", 00:36:40.489 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:40.489 "is_configured": true, 00:36:40.489 "data_offset": 256, 00:36:40.489 "data_size": 7936 00:36:40.489 }, 00:36:40.489 { 00:36:40.489 "name": "BaseBdev2", 00:36:40.489 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:40.489 "is_configured": true, 00:36:40.489 "data_offset": 256, 00:36:40.489 "data_size": 7936 00:36:40.489 } 00:36:40.489 ] 00:36:40.489 }' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:40.489 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1387 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:40.489 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:40.490 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:40.490 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:40.490 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.490 23:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.748 23:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:40.748 "name": "raid_bdev1", 00:36:40.748 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:40.748 "strip_size_kb": 0, 00:36:40.748 "state": "online", 00:36:40.748 "raid_level": "raid1", 00:36:40.748 "superblock": true, 00:36:40.748 "num_base_bdevs": 2, 00:36:40.748 "num_base_bdevs_discovered": 2, 00:36:40.748 "num_base_bdevs_operational": 2, 00:36:40.748 "process": { 00:36:40.748 "type": "rebuild", 00:36:40.748 "target": "spare", 00:36:40.748 "progress": { 00:36:40.748 "blocks": 3840, 00:36:40.748 "percent": 48 00:36:40.748 } 00:36:40.748 }, 00:36:40.748 "base_bdevs_list": [ 00:36:40.748 { 00:36:40.748 "name": "spare", 00:36:40.748 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:40.748 "is_configured": true, 00:36:40.748 "data_offset": 256, 00:36:40.748 "data_size": 7936 00:36:40.748 }, 00:36:40.748 { 00:36:40.748 "name": "BaseBdev2", 00:36:40.748 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:40.748 "is_configured": true, 00:36:40.748 "data_offset": 256, 00:36:40.748 "data_size": 7936 00:36:40.748 } 00:36:40.748 ] 00:36:40.748 }' 00:36:40.748 23:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:40.748 23:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:41.006 23:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 
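[editor's note] The script error recorded above ("bdev_raid.sh: line 665: [: =: unary operator expected") is a genuine quoting bug, not a test failure: the left-hand variable in '[' = false ']' expanded to the empty string and was unquoted, so [ saw only "= false". Quoting the expansion, or using [[ ]] which does not word-split, avoids it; a minimal illustration:

  flag=""                      # empty, as in the failing check above
  # [ $flag = false ]          # expands to '[ = false ]' -> unary operator expected
  [ "$flag" = false ] || echo "quoted test degrades gracefully"
  [[ $flag = false ]] || echo "[[ ]] needs no quoting"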
-- # jq -r '.process.target // "none"' 00:36:41.006 23:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:41.006 23:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.941 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.199 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:42.199 "name": "raid_bdev1", 00:36:42.199 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:42.199 "strip_size_kb": 0, 00:36:42.199 "state": "online", 00:36:42.199 "raid_level": "raid1", 00:36:42.199 "superblock": true, 00:36:42.199 "num_base_bdevs": 2, 00:36:42.199 "num_base_bdevs_discovered": 2, 00:36:42.199 "num_base_bdevs_operational": 2, 00:36:42.199 "process": { 00:36:42.199 "type": "rebuild", 00:36:42.199 "target": "spare", 00:36:42.200 "progress": { 00:36:42.200 "blocks": 7424, 00:36:42.200 "percent": 93 00:36:42.200 } 00:36:42.200 }, 00:36:42.200 "base_bdevs_list": [ 00:36:42.200 { 00:36:42.200 "name": "spare", 00:36:42.200 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:42.200 "is_configured": true, 00:36:42.200 "data_offset": 256, 00:36:42.200 "data_size": 7936 00:36:42.200 }, 00:36:42.200 { 00:36:42.200 "name": "BaseBdev2", 00:36:42.200 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:42.200 "is_configured": true, 00:36:42.200 "data_offset": 256, 00:36:42.200 "data_size": 7936 00:36:42.200 } 00:36:42.200 ] 00:36:42.200 }' 00:36:42.200 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:42.200 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:42.200 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:42.200 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:42.200 23:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:42.200 [2024-07-13 23:22:31.592175] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:42.200 [2024-07-13 23:22:31.592429] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:42.200 [2024-07-13 23:22:31.592707] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:43.576 23:22:32 
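[editor's sketch] The rebuild monitor around bdev_raid.sh@706-710 is bounded by bash's builtin SECONDS counter rather than a separate timer: re-verify the process each second until the rebuild finishes or the deadline passes. An inferred sketch of that loop, with the deadline value computed at this point in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  timeout=1387
  while (( SECONDS < timeout )); do
      type=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
             | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [[ $type == rebuild ]] || break   # process object disappears once rebuild completes
      sleep 1
  done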
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:43.576 "name": "raid_bdev1", 00:36:43.576 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:43.576 "strip_size_kb": 0, 00:36:43.576 "state": "online", 00:36:43.576 "raid_level": "raid1", 00:36:43.576 "superblock": true, 00:36:43.576 "num_base_bdevs": 2, 00:36:43.576 "num_base_bdevs_discovered": 2, 00:36:43.576 "num_base_bdevs_operational": 2, 00:36:43.576 "base_bdevs_list": [ 00:36:43.576 { 00:36:43.576 "name": "spare", 00:36:43.576 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:43.576 "is_configured": true, 00:36:43.576 "data_offset": 256, 00:36:43.576 "data_size": 7936 00:36:43.576 }, 00:36:43.576 { 00:36:43.576 "name": "BaseBdev2", 00:36:43.576 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:43.576 "is_configured": true, 00:36:43.576 "data_offset": 256, 00:36:43.576 "data_size": 7936 00:36:43.576 } 00:36:43.576 ] 00:36:43.576 }' 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.576 23:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:36:43.834 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:43.834 "name": "raid_bdev1", 00:36:43.834 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:43.834 "strip_size_kb": 0, 00:36:43.834 "state": "online", 00:36:43.834 "raid_level": "raid1", 00:36:43.834 "superblock": true, 00:36:43.834 "num_base_bdevs": 2, 00:36:43.834 "num_base_bdevs_discovered": 2, 00:36:43.834 "num_base_bdevs_operational": 2, 00:36:43.834 "base_bdevs_list": [ 00:36:43.834 { 00:36:43.834 "name": "spare", 00:36:43.834 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:43.834 "is_configured": true, 00:36:43.834 "data_offset": 256, 00:36:43.834 "data_size": 7936 00:36:43.834 }, 00:36:43.834 { 00:36:43.834 "name": "BaseBdev2", 00:36:43.834 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:43.834 "is_configured": true, 00:36:43.834 "data_offset": 256, 00:36:43.834 "data_size": 7936 00:36:43.834 } 00:36:43.834 ] 00:36:43.834 }' 00:36:43.834 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:44.091 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.399 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:44.399 "name": "raid_bdev1", 00:36:44.399 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:44.399 "strip_size_kb": 0, 00:36:44.399 "state": "online", 00:36:44.399 "raid_level": "raid1", 00:36:44.399 "superblock": true, 00:36:44.399 "num_base_bdevs": 2, 00:36:44.399 "num_base_bdevs_discovered": 2, 00:36:44.399 "num_base_bdevs_operational": 2, 00:36:44.399 "base_bdevs_list": 
[ 00:36:44.399 { 00:36:44.399 "name": "spare", 00:36:44.399 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:44.399 "is_configured": true, 00:36:44.399 "data_offset": 256, 00:36:44.399 "data_size": 7936 00:36:44.399 }, 00:36:44.399 { 00:36:44.399 "name": "BaseBdev2", 00:36:44.399 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:44.399 "is_configured": true, 00:36:44.399 "data_offset": 256, 00:36:44.399 "data_size": 7936 00:36:44.399 } 00:36:44.399 ] 00:36:44.399 }' 00:36:44.399 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:44.399 23:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:44.965 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:45.223 [2024-07-13 23:22:34.429357] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:45.223 [2024-07-13 23:22:34.429595] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:45.223 [2024-07-13 23:22:34.429825] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:45.223 [2024-07-13 23:22:34.430021] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:45.223 [2024-07-13 23:22:34.430135] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:36:45.223 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:36:45.223 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:45.479 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:45.737 /dev/nbd0 00:36:45.737 23:22:34 
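[editor's sketch] nbd_start_disks, as traced here, walks two parallel arrays and maps the i-th bdev onto the i-th /dev/nbdX node, then waits for each to appear. The core loop, condensed from the nbd_common.sh trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  bdev_list=('BaseBdev1' 'spare')
  nbd_list=('/dev/nbd0' '/dev/nbd1')
  for ((i = 0; i < ${#bdev_list[@]}; i++)); do
      "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
  done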
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:45.737 1+0 records in 00:36:45.737 1+0 records out 00:36:45.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614218 s, 6.7 MB/s 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:45.737 23:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:45.995 /dev/nbd1 00:36:45.995 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:45.995 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:45.995 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:36:45.995 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:36:45.995 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:45.995 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:36:45.996 23:22:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:45.996 1+0 records in 00:36:45.996 1+0 records out 00:36:45.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769669 s, 5.3 MB/s 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:45.996 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:46.254 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:46.255 23:22:35 
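[editor's note] With both the surviving base bdev and the spare exported, the test proves the mirror is coherent by byte-comparing the two devices. cmp -i 1048576 skips the first 1 MiB on each side, which matches the data_offset of 256 blocks x 4096-byte blocklen reported in the JSON above, so only the mirrored data region is compared. Equivalent standalone check:

  # Compare mirrored halves of the raid1, skipping the 1 MiB metadata region.
  cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "base bdev and spare hold identical data"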
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:46.514 23:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:46.773 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:47.032 [2024-07-13 23:22:36.354040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:47.032 [2024-07-13 23:22:36.354306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:47.032 [2024-07-13 23:22:36.354462] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:47.032 [2024-07-13 23:22:36.354630] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:47.032 [2024-07-13 23:22:36.356790] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:47.032 [2024-07-13 23:22:36.357029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:47.032 [2024-07-13 23:22:36.357300] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:47.032 [2024-07-13 23:22:36.357509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:47.032 [2024-07-13 23:22:36.357796] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:47.032 spare 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.032 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:47.291 [2024-07-13 23:22:36.458013] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:36:47.291 [2024-07-13 23:22:36.458163] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:47.291 [2024-07-13 23:22:36.458362] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:36:47.291 [2024-07-13 23:22:36.458604] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:36:47.291 [2024-07-13 23:22:36.458711] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:36:47.291 [2024-07-13 23:22:36.458919] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:47.292 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:47.292 "name": "raid_bdev1", 00:36:47.292 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:47.292 "strip_size_kb": 0, 00:36:47.292 "state": "online", 00:36:47.292 "raid_level": "raid1", 00:36:47.292 "superblock": true, 00:36:47.292 "num_base_bdevs": 2, 00:36:47.292 "num_base_bdevs_discovered": 2, 00:36:47.292 "num_base_bdevs_operational": 2, 00:36:47.292 "base_bdevs_list": [ 00:36:47.292 { 00:36:47.292 "name": "spare", 00:36:47.292 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:47.292 "is_configured": true, 00:36:47.292 "data_offset": 256, 00:36:47.292 "data_size": 7936 00:36:47.292 }, 00:36:47.292 { 00:36:47.292 "name": "BaseBdev2", 00:36:47.292 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:47.292 "is_configured": true, 00:36:47.292 "data_offset": 256, 00:36:47.292 "data_size": 7936 00:36:47.292 } 00:36:47.292 ] 00:36:47.292 }' 00:36:47.292 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:47.292 23:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
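[editor's sketch] Re-creating the spare_delay passthru bdev is what triggers the examine path seen above: SPDK finds the raid superblock on the new bdev ("raid superblock found on bdev spare") and re-claims it into raid_bdev1 automatically. The RPC pair exercised here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # Drop and re-create the passthru bdev; examine-on-create re-claims it.
  "$rpc" -s "$sock" bdev_passthru_delete spare
  "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare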
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:48.226 "name": "raid_bdev1", 00:36:48.226 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:48.226 "strip_size_kb": 0, 00:36:48.226 "state": "online", 00:36:48.226 "raid_level": "raid1", 00:36:48.226 "superblock": true, 00:36:48.226 "num_base_bdevs": 2, 00:36:48.226 "num_base_bdevs_discovered": 2, 00:36:48.226 "num_base_bdevs_operational": 2, 00:36:48.226 "base_bdevs_list": [ 00:36:48.226 { 00:36:48.226 "name": "spare", 00:36:48.226 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:48.226 "is_configured": true, 00:36:48.226 "data_offset": 256, 00:36:48.226 "data_size": 7936 00:36:48.226 }, 00:36:48.226 { 00:36:48.226 "name": "BaseBdev2", 00:36:48.226 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:48.226 "is_configured": true, 00:36:48.226 "data_offset": 256, 00:36:48.226 "data_size": 7936 00:36:48.226 } 00:36:48.226 ] 00:36:48.226 }' 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:48.226 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:48.543 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.543 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:48.543 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:48.543 23:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:48.817 [2024-07-13 23:22:38.103459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:48.817 23:22:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.817 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.076 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:49.076 "name": "raid_bdev1", 00:36:49.076 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:49.076 "strip_size_kb": 0, 00:36:49.076 "state": "online", 00:36:49.076 "raid_level": "raid1", 00:36:49.076 "superblock": true, 00:36:49.076 "num_base_bdevs": 2, 00:36:49.076 "num_base_bdevs_discovered": 1, 00:36:49.076 "num_base_bdevs_operational": 1, 00:36:49.076 "base_bdevs_list": [ 00:36:49.076 { 00:36:49.076 "name": null, 00:36:49.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.076 "is_configured": false, 00:36:49.076 "data_offset": 256, 00:36:49.076 "data_size": 7936 00:36:49.076 }, 00:36:49.076 { 00:36:49.076 "name": "BaseBdev2", 00:36:49.076 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:49.076 "is_configured": true, 00:36:49.076 "data_offset": 256, 00:36:49.076 "data_size": 7936 00:36:49.076 } 00:36:49.076 ] 00:36:49.076 }' 00:36:49.076 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:49.076 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.642 23:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:49.900 [2024-07-13 23:22:39.195802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:49.900 [2024-07-13 23:22:39.196311] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:49.900 [2024-07-13 23:22:39.196442] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
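[editor's note] When the spare is added back, the examine path compares superblock sequence numbers ("seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)"), concludes the device is stale but re-addable, and starts a rebuild onto it. Driving that cycle manually uses the same two RPCs that recur throughout this test:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare          # degrade to 1 of 2
  "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare  # re-add; rebuild starts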
00:36:49.900 [2024-07-13 23:22:39.196568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:49.900 [2024-07-13 23:22:39.198744] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb4f0 00:36:49.900 [2024-07-13 23:22:39.200864] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:49.900 23:22:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:50.833 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.092 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:51.092 "name": "raid_bdev1", 00:36:51.092 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:51.092 "strip_size_kb": 0, 00:36:51.092 "state": "online", 00:36:51.092 "raid_level": "raid1", 00:36:51.092 "superblock": true, 00:36:51.092 "num_base_bdevs": 2, 00:36:51.092 "num_base_bdevs_discovered": 2, 00:36:51.092 "num_base_bdevs_operational": 2, 00:36:51.092 "process": { 00:36:51.092 "type": "rebuild", 00:36:51.092 "target": "spare", 00:36:51.092 "progress": { 00:36:51.092 "blocks": 3072, 00:36:51.092 "percent": 38 00:36:51.092 } 00:36:51.092 }, 00:36:51.092 "base_bdevs_list": [ 00:36:51.092 { 00:36:51.092 "name": "spare", 00:36:51.092 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:51.092 "is_configured": true, 00:36:51.092 "data_offset": 256, 00:36:51.092 "data_size": 7936 00:36:51.092 }, 00:36:51.092 { 00:36:51.092 "name": "BaseBdev2", 00:36:51.092 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:51.092 "is_configured": true, 00:36:51.092 "data_offset": 256, 00:36:51.092 "data_size": 7936 00:36:51.092 } 00:36:51.092 ] 00:36:51.092 }' 00:36:51.092 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:51.350 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:51.350 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:51.350 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:51.350 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:51.608 [2024-07-13 23:22:40.770272] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:51.608 [2024-07-13 23:22:40.810214] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:36:51.608 [2024-07-13 23:22:40.810439] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.608 [2024-07-13 23:22:40.810568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:51.608 [2024-07-13 23:22:40.810615] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:51.608 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:51.608 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:51.608 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:51.608 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:51.608 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.609 23:22:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.866 23:22:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:51.866 "name": "raid_bdev1", 00:36:51.866 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:51.866 "strip_size_kb": 0, 00:36:51.866 "state": "online", 00:36:51.866 "raid_level": "raid1", 00:36:51.866 "superblock": true, 00:36:51.866 "num_base_bdevs": 2, 00:36:51.866 "num_base_bdevs_discovered": 1, 00:36:51.866 "num_base_bdevs_operational": 1, 00:36:51.866 "base_bdevs_list": [ 00:36:51.866 { 00:36:51.866 "name": null, 00:36:51.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.866 "is_configured": false, 00:36:51.866 "data_offset": 256, 00:36:51.866 "data_size": 7936 00:36:51.866 }, 00:36:51.866 { 00:36:51.866 "name": "BaseBdev2", 00:36:51.866 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:51.866 "is_configured": true, 00:36:51.866 "data_offset": 256, 00:36:51.866 "data_size": 7936 00:36:51.866 } 00:36:51.866 ] 00:36:51.866 }' 00:36:51.866 23:22:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:51.866 23:22:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.432 23:22:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:52.690 [2024-07-13 23:22:42.006590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:52.690 [2024-07-13 23:22:42.006876] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:52.690 [2024-07-13 23:22:42.007045] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:36:52.690 [2024-07-13 23:22:42.007191] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:52.690 [2024-07-13 23:22:42.007559] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:52.690 [2024-07-13 23:22:42.007734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:52.690 [2024-07-13 23:22:42.007984] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:52.690 [2024-07-13 23:22:42.008105] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:52.690 [2024-07-13 23:22:42.008241] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:52.690 [2024-07-13 23:22:42.008451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:52.690 [2024-07-13 23:22:42.010865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb830 00:36:52.690 spare 00:36:52.690 [2024-07-13 23:22:42.013103] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:52.690 23:22:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:53.623 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:53.623 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:53.623 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:53.881 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:53.881 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:53.882 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.882 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.882 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:53.882 "name": "raid_bdev1", 00:36:53.882 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:53.882 "strip_size_kb": 0, 00:36:53.882 "state": "online", 00:36:53.882 "raid_level": "raid1", 00:36:53.882 "superblock": true, 00:36:53.882 "num_base_bdevs": 2, 00:36:53.882 "num_base_bdevs_discovered": 2, 00:36:53.882 "num_base_bdevs_operational": 2, 00:36:53.882 "process": { 00:36:53.882 "type": "rebuild", 00:36:53.882 "target": "spare", 00:36:53.882 "progress": { 00:36:53.882 "blocks": 3072, 00:36:53.882 "percent": 38 00:36:53.882 } 00:36:53.882 }, 00:36:53.882 "base_bdevs_list": [ 00:36:53.882 { 00:36:53.882 "name": "spare", 00:36:53.882 "uuid": "48d3f5eb-5e94-53d2-8988-eed5a5113ae0", 00:36:53.882 "is_configured": true, 00:36:53.882 "data_offset": 256, 00:36:53.882 "data_size": 7936 00:36:53.882 }, 00:36:53.882 { 00:36:53.882 "name": "BaseBdev2", 00:36:53.882 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:53.882 "is_configured": true, 00:36:53.882 
"data_offset": 256, 00:36:53.882 "data_size": 7936 00:36:53.882 } 00:36:53.882 ] 00:36:53.882 }' 00:36:53.882 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:54.139 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:54.139 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:54.139 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:54.139 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:54.396 [2024-07-13 23:22:43.615911] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:54.396 [2024-07-13 23:22:43.622354] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:54.396 [2024-07-13 23:22:43.622571] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:54.396 [2024-07-13 23:22:43.622631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:54.396 [2024-07-13 23:22:43.622744] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:54.396 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.654 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:54.654 "name": "raid_bdev1", 00:36:54.654 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:54.654 "strip_size_kb": 0, 00:36:54.654 "state": "online", 00:36:54.654 "raid_level": "raid1", 00:36:54.654 "superblock": true, 00:36:54.654 "num_base_bdevs": 2, 00:36:54.654 "num_base_bdevs_discovered": 1, 00:36:54.654 "num_base_bdevs_operational": 1, 00:36:54.654 "base_bdevs_list": [ 00:36:54.654 { 00:36:54.654 "name": null, 00:36:54.655 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:54.655 "is_configured": false, 00:36:54.655 "data_offset": 256, 00:36:54.655 "data_size": 7936 00:36:54.655 }, 00:36:54.655 { 00:36:54.655 "name": "BaseBdev2", 00:36:54.655 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:54.655 "is_configured": true, 00:36:54.655 "data_offset": 256, 00:36:54.655 "data_size": 7936 00:36:54.655 } 00:36:54.655 ] 00:36:54.655 }' 00:36:54.655 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:54.655 23:22:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.220 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:55.478 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:55.478 "name": "raid_bdev1", 00:36:55.478 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:55.478 "strip_size_kb": 0, 00:36:55.478 "state": "online", 00:36:55.478 "raid_level": "raid1", 00:36:55.478 "superblock": true, 00:36:55.478 "num_base_bdevs": 2, 00:36:55.478 "num_base_bdevs_discovered": 1, 00:36:55.478 "num_base_bdevs_operational": 1, 00:36:55.478 "base_bdevs_list": [ 00:36:55.478 { 00:36:55.478 "name": null, 00:36:55.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:55.478 "is_configured": false, 00:36:55.478 "data_offset": 256, 00:36:55.478 "data_size": 7936 00:36:55.478 }, 00:36:55.478 { 00:36:55.478 "name": "BaseBdev2", 00:36:55.478 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:55.478 "is_configured": true, 00:36:55.478 "data_offset": 256, 00:36:55.478 "data_size": 7936 00:36:55.478 } 00:36:55.478 ] 00:36:55.478 }' 00:36:55.478 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:55.478 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:55.478 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:55.736 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:55.736 23:22:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:55.736 23:22:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:55.994 [2024-07-13 23:22:45.374849] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:36:55.994 [2024-07-13 23:22:45.375612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:55.994 [2024-07-13 23:22:45.375954] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:36:55.994 [2024-07-13 23:22:45.376219] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:55.994 [2024-07-13 23:22:45.376711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:55.994 [2024-07-13 23:22:45.377025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:55.994 [2024-07-13 23:22:45.377404] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:55.994 [2024-07-13 23:22:45.377588] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:55.994 [2024-07-13 23:22:45.377700] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:55.994 BaseBdev1 00:36:55.994 23:22:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:57.367 "name": "raid_bdev1", 00:36:57.367 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:57.367 "strip_size_kb": 0, 00:36:57.367 "state": "online", 00:36:57.367 "raid_level": "raid1", 00:36:57.367 "superblock": true, 00:36:57.367 "num_base_bdevs": 2, 00:36:57.367 "num_base_bdevs_discovered": 1, 00:36:57.367 "num_base_bdevs_operational": 1, 00:36:57.367 "base_bdevs_list": [ 00:36:57.367 { 00:36:57.367 "name": null, 00:36:57.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.367 "is_configured": false, 00:36:57.367 "data_offset": 256, 00:36:57.367 "data_size": 7936 00:36:57.367 }, 00:36:57.367 { 00:36:57.367 "name": 
"BaseBdev2", 00:36:57.367 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:57.367 "is_configured": true, 00:36:57.367 "data_offset": 256, 00:36:57.367 "data_size": 7936 00:36:57.367 } 00:36:57.367 ] 00:36:57.367 }' 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:57.367 23:22:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:58.301 "name": "raid_bdev1", 00:36:58.301 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:58.301 "strip_size_kb": 0, 00:36:58.301 "state": "online", 00:36:58.301 "raid_level": "raid1", 00:36:58.301 "superblock": true, 00:36:58.301 "num_base_bdevs": 2, 00:36:58.301 "num_base_bdevs_discovered": 1, 00:36:58.301 "num_base_bdevs_operational": 1, 00:36:58.301 "base_bdevs_list": [ 00:36:58.301 { 00:36:58.301 "name": null, 00:36:58.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.301 "is_configured": false, 00:36:58.301 "data_offset": 256, 00:36:58.301 "data_size": 7936 00:36:58.301 }, 00:36:58.301 { 00:36:58.301 "name": "BaseBdev2", 00:36:58.301 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:58.301 "is_configured": true, 00:36:58.301 "data_offset": 256, 00:36:58.301 "data_size": 7936 00:36:58.301 } 00:36:58.301 ] 00:36:58.301 }' 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:58.301 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:58.560 [2024-07-13 23:22:47.899537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:58.560 [2024-07-13 23:22:47.899973] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:58.560 [2024-07-13 23:22:47.900098] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:58.560 request: 00:36:58.560 { 00:36:58.560 "base_bdev": "BaseBdev1", 00:36:58.560 "raid_bdev": "raid_bdev1", 00:36:58.560 "method": "bdev_raid_add_base_bdev", 00:36:58.560 "req_id": 1 00:36:58.560 } 00:36:58.560 Got JSON-RPC error response 00:36:58.560 response: 00:36:58.560 { 00:36:58.560 "code": -22, 00:36:58.560 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:58.560 } 00:36:58.560 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:36:58.560 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:58.560 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:58.560 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:58.560 23:22:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.936 23:22:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.936 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:59.936 "name": "raid_bdev1", 00:36:59.936 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:36:59.936 "strip_size_kb": 0, 00:36:59.936 "state": "online", 00:36:59.936 "raid_level": "raid1", 00:36:59.936 "superblock": true, 00:36:59.936 "num_base_bdevs": 2, 00:36:59.936 "num_base_bdevs_discovered": 1, 00:36:59.936 "num_base_bdevs_operational": 1, 00:36:59.936 "base_bdevs_list": [ 00:36:59.936 { 00:36:59.936 "name": null, 00:36:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.936 "is_configured": false, 00:36:59.936 "data_offset": 256, 00:36:59.936 "data_size": 7936 00:36:59.936 }, 00:36:59.936 { 00:36:59.936 "name": "BaseBdev2", 00:36:59.936 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:36:59.936 "is_configured": true, 00:36:59.936 "data_offset": 256, 00:36:59.936 "data_size": 7936 00:36:59.936 } 00:36:59.936 ] 00:36:59.936 }' 00:36:59.936 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:59.936 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.503 23:22:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.762 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:00.762 "name": "raid_bdev1", 00:37:00.762 "uuid": "6af679f8-8ca5-4cfb-9b67-2cd61d8ff7d8", 00:37:00.762 "strip_size_kb": 0, 00:37:00.762 "state": "online", 00:37:00.762 "raid_level": "raid1", 00:37:00.762 "superblock": true, 00:37:00.762 "num_base_bdevs": 2, 00:37:00.762 "num_base_bdevs_discovered": 1, 00:37:00.762 "num_base_bdevs_operational": 1, 00:37:00.762 "base_bdevs_list": [ 00:37:00.762 { 00:37:00.762 "name": null, 00:37:00.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.762 "is_configured": false, 00:37:00.762 "data_offset": 256, 00:37:00.762 "data_size": 7936 
00:37:00.762 }, 00:37:00.762 { 00:37:00.762 "name": "BaseBdev2", 00:37:00.762 "uuid": "9fd52403-006d-56bc-871a-868251fae58d", 00:37:00.762 "is_configured": true, 00:37:00.762 "data_offset": 256, 00:37:00.762 "data_size": 7936 00:37:00.762 } 00:37:00.762 ] 00:37:00.762 }' 00:37:00.762 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 171044 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 171044 ']' 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 171044 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171044 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171044' 00:37:01.021 killing process with pid 171044 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 171044 00:37:01.021 Received shutdown signal, test time was about 60.000000 seconds 00:37:01.021 00:37:01.021 Latency(us) 00:37:01.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.021 =================================================================================================================== 00:37:01.021 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:01.021 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 171044 00:37:01.021 [2024-07-13 23:22:50.254874] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:01.021 [2024-07-13 23:22:50.255135] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:01.021 [2024-07-13 23:22:50.255293] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:01.021 [2024-07-13 23:22:50.255430] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:37:01.021 [2024-07-13 23:22:50.285011] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:01.281 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:37:01.281 00:37:01.281 real 0m32.167s 00:37:01.281 user 0m52.205s 00:37:01.281 sys 0m3.715s 00:37:01.281 23:22:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:01.281 23:22:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:01.281 ************************************ 00:37:01.281 END TEST raid_rebuild_test_sb_md_separate 00:37:01.281 ************************************ 00:37:01.281 23:22:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:01.281 23:22:50 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:37:01.281 23:22:50 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:37:01.281 23:22:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:37:01.281 23:22:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:01.281 23:22:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:01.281 ************************************ 00:37:01.281 START TEST raid_state_function_test_sb_md_interleaved 00:37:01.281 ************************************ 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:37:01.281 
23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=171927 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 171927' 00:37:01.281 Process raid pid: 171927 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 171927 /var/tmp/spdk-raid.sock 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 171927 ']' 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:01.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:01.281 23:22:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:01.281 [2024-07-13 23:22:50.652427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:37:01.281 [2024-07-13 23:22:50.652837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.540 [2024-07-13 23:22:50.795208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.540 [2024-07-13 23:22:50.883329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.540 [2024-07-13 23:22:50.936697] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:02.476 [2024-07-13 23:22:51.834886] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:02.476 [2024-07-13 23:22:51.835216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:02.476 [2024-07-13 23:22:51.835356] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:02.476 [2024-07-13 23:22:51.835420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.476 23:22:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.735 23:22:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.735 "name": "Existed_Raid", 00:37:02.735 "uuid": "c1d3b35b-282b-4812-b180-fbc04735babb", 
00:37:02.735 "strip_size_kb": 0, 00:37:02.735 "state": "configuring", 00:37:02.735 "raid_level": "raid1", 00:37:02.735 "superblock": true, 00:37:02.735 "num_base_bdevs": 2, 00:37:02.735 "num_base_bdevs_discovered": 0, 00:37:02.735 "num_base_bdevs_operational": 2, 00:37:02.735 "base_bdevs_list": [ 00:37:02.735 { 00:37:02.735 "name": "BaseBdev1", 00:37:02.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.735 "is_configured": false, 00:37:02.735 "data_offset": 0, 00:37:02.735 "data_size": 0 00:37:02.735 }, 00:37:02.735 { 00:37:02.735 "name": "BaseBdev2", 00:37:02.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.735 "is_configured": false, 00:37:02.735 "data_offset": 0, 00:37:02.735 "data_size": 0 00:37:02.735 } 00:37:02.735 ] 00:37:02.735 }' 00:37:02.735 23:22:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.735 23:22:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:03.671 23:22:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:03.671 [2024-07-13 23:22:52.991073] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:03.671 [2024-07-13 23:22:52.991355] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:37:03.671 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:03.935 [2024-07-13 23:22:53.227112] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:03.935 [2024-07-13 23:22:53.227482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:03.935 [2024-07-13 23:22:53.227601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:03.935 [2024-07-13 23:22:53.227688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:03.935 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:37:04.195 [2024-07-13 23:22:53.461921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:04.195 BaseBdev1 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:04.195 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:04.453 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:04.712 [ 00:37:04.712 { 00:37:04.712 "name": "BaseBdev1", 00:37:04.712 "aliases": [ 00:37:04.712 "11c21e78-f586-4c75-910c-a68e2a45c572" 00:37:04.712 ], 00:37:04.712 "product_name": "Malloc disk", 00:37:04.712 "block_size": 4128, 00:37:04.712 "num_blocks": 8192, 00:37:04.712 "uuid": "11c21e78-f586-4c75-910c-a68e2a45c572", 00:37:04.712 "md_size": 32, 00:37:04.712 "md_interleave": true, 00:37:04.712 "dif_type": 0, 00:37:04.712 "assigned_rate_limits": { 00:37:04.712 "rw_ios_per_sec": 0, 00:37:04.712 "rw_mbytes_per_sec": 0, 00:37:04.712 "r_mbytes_per_sec": 0, 00:37:04.712 "w_mbytes_per_sec": 0 00:37:04.712 }, 00:37:04.712 "claimed": true, 00:37:04.712 "claim_type": "exclusive_write", 00:37:04.712 "zoned": false, 00:37:04.712 "supported_io_types": { 00:37:04.712 "read": true, 00:37:04.712 "write": true, 00:37:04.712 "unmap": true, 00:37:04.712 "flush": true, 00:37:04.712 "reset": true, 00:37:04.712 "nvme_admin": false, 00:37:04.712 "nvme_io": false, 00:37:04.712 "nvme_io_md": false, 00:37:04.712 "write_zeroes": true, 00:37:04.712 "zcopy": true, 00:37:04.712 "get_zone_info": false, 00:37:04.712 "zone_management": false, 00:37:04.712 "zone_append": false, 00:37:04.712 "compare": false, 00:37:04.712 "compare_and_write": false, 00:37:04.712 "abort": true, 00:37:04.712 "seek_hole": false, 00:37:04.712 "seek_data": false, 00:37:04.712 "copy": true, 00:37:04.712 "nvme_iov_md": false 00:37:04.712 }, 00:37:04.713 "memory_domains": [ 00:37:04.713 { 00:37:04.713 "dma_device_id": "system", 00:37:04.713 "dma_device_type": 1 00:37:04.713 }, 00:37:04.713 { 00:37:04.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.713 "dma_device_type": 2 00:37:04.713 } 00:37:04.713 ], 00:37:04.713 "driver_specific": {} 00:37:04.713 } 00:37:04.713 ] 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:04.713 23:22:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.713 23:22:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:04.971 23:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:04.971 "name": "Existed_Raid", 00:37:04.971 "uuid": "1aed6639-65b5-4f1f-b7af-030db336960f", 00:37:04.971 "strip_size_kb": 0, 00:37:04.971 "state": "configuring", 00:37:04.971 "raid_level": "raid1", 00:37:04.971 "superblock": true, 00:37:04.971 "num_base_bdevs": 2, 00:37:04.971 "num_base_bdevs_discovered": 1, 00:37:04.971 "num_base_bdevs_operational": 2, 00:37:04.971 "base_bdevs_list": [ 00:37:04.971 { 00:37:04.971 "name": "BaseBdev1", 00:37:04.971 "uuid": "11c21e78-f586-4c75-910c-a68e2a45c572", 00:37:04.971 "is_configured": true, 00:37:04.971 "data_offset": 256, 00:37:04.971 "data_size": 7936 00:37:04.971 }, 00:37:04.971 { 00:37:04.971 "name": "BaseBdev2", 00:37:04.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:04.971 "is_configured": false, 00:37:04.971 "data_offset": 0, 00:37:04.971 "data_size": 0 00:37:04.971 } 00:37:04.971 ] 00:37:04.971 }' 00:37:04.971 23:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:04.971 23:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:05.537 23:22:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:05.795 [2024-07-13 23:22:55.126363] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:05.795 [2024-07-13 23:22:55.126606] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:37:05.795 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:06.053 [2024-07-13 23:22:55.406466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:06.053 [2024-07-13 23:22:55.408717] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:06.053 [2024-07-13 23:22:55.408934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:06.053 23:22:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:06.053 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.310 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:06.310 "name": "Existed_Raid", 00:37:06.310 "uuid": "4b9a78f0-0ce9-4ccf-982b-1e01f7896d13", 00:37:06.310 "strip_size_kb": 0, 00:37:06.310 "state": "configuring", 00:37:06.310 "raid_level": "raid1", 00:37:06.310 "superblock": true, 00:37:06.310 "num_base_bdevs": 2, 00:37:06.310 "num_base_bdevs_discovered": 1, 00:37:06.310 "num_base_bdevs_operational": 2, 00:37:06.310 "base_bdevs_list": [ 00:37:06.310 { 00:37:06.310 "name": "BaseBdev1", 00:37:06.310 "uuid": "11c21e78-f586-4c75-910c-a68e2a45c572", 00:37:06.310 "is_configured": true, 00:37:06.310 "data_offset": 256, 00:37:06.310 "data_size": 7936 00:37:06.310 }, 00:37:06.310 { 00:37:06.310 "name": "BaseBdev2", 00:37:06.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.310 "is_configured": false, 00:37:06.310 "data_offset": 0, 00:37:06.310 "data_size": 0 00:37:06.310 } 00:37:06.310 ] 00:37:06.310 }' 00:37:06.310 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:06.310 23:22:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:07.244 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:37:07.244 [2024-07-13 23:22:56.609229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:07.244 [2024-07-13 23:22:56.609757] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:37:07.244 [2024-07-13 23:22:56.609889] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:07.244 [2024-07-13 23:22:56.610065] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:37:07.244 [2024-07-13 23:22:56.610293] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:37:07.244 [2024-07-13 23:22:56.610416] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:37:07.244 [2024-07-13 23:22:56.610595] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.244 BaseBdev2 00:37:07.244 23:22:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:37:07.244 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:37:07.244 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:07.244 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:37:07.244 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:07.244 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:07.245 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:07.812 23:22:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:07.812 [ 00:37:07.812 { 00:37:07.812 "name": "BaseBdev2", 00:37:07.812 "aliases": [ 00:37:07.812 "e03ab08e-d5fc-4682-a573-2fefd6a97543" 00:37:07.812 ], 00:37:07.812 "product_name": "Malloc disk", 00:37:07.812 "block_size": 4128, 00:37:07.812 "num_blocks": 8192, 00:37:07.812 "uuid": "e03ab08e-d5fc-4682-a573-2fefd6a97543", 00:37:07.812 "md_size": 32, 00:37:07.812 "md_interleave": true, 00:37:07.812 "dif_type": 0, 00:37:07.812 "assigned_rate_limits": { 00:37:07.812 "rw_ios_per_sec": 0, 00:37:07.812 "rw_mbytes_per_sec": 0, 00:37:07.812 "r_mbytes_per_sec": 0, 00:37:07.812 "w_mbytes_per_sec": 0 00:37:07.812 }, 00:37:07.812 "claimed": true, 00:37:07.812 "claim_type": "exclusive_write", 00:37:07.812 "zoned": false, 00:37:07.812 "supported_io_types": { 00:37:07.812 "read": true, 00:37:07.812 "write": true, 00:37:07.812 "unmap": true, 00:37:07.812 "flush": true, 00:37:07.812 "reset": true, 00:37:07.812 "nvme_admin": false, 00:37:07.812 "nvme_io": false, 00:37:07.812 "nvme_io_md": false, 00:37:07.812 "write_zeroes": true, 00:37:07.812 "zcopy": true, 00:37:07.812 "get_zone_info": false, 00:37:07.812 "zone_management": false, 00:37:07.812 "zone_append": false, 00:37:07.812 "compare": false, 00:37:07.812 "compare_and_write": false, 00:37:07.812 "abort": true, 00:37:07.812 "seek_hole": false, 00:37:07.812 "seek_data": false, 00:37:07.812 "copy": true, 00:37:07.812 "nvme_iov_md": false 00:37:07.812 }, 00:37:07.812 "memory_domains": [ 00:37:07.812 { 00:37:07.812 "dma_device_id": "system", 00:37:07.812 "dma_device_type": 1 00:37:07.812 }, 00:37:07.812 { 00:37:07.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:07.812 "dma_device_type": 2 00:37:07.812 } 00:37:07.812 ], 00:37:07.812 "driver_specific": {} 00:37:07.812 } 00:37:07.812 ] 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.812 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:08.070 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.071 "name": "Existed_Raid", 00:37:08.071 "uuid": "4b9a78f0-0ce9-4ccf-982b-1e01f7896d13", 00:37:08.071 "strip_size_kb": 0, 00:37:08.071 "state": "online", 00:37:08.071 "raid_level": "raid1", 00:37:08.071 "superblock": true, 00:37:08.071 "num_base_bdevs": 2, 00:37:08.071 "num_base_bdevs_discovered": 2, 00:37:08.071 "num_base_bdevs_operational": 2, 00:37:08.071 "base_bdevs_list": [ 00:37:08.071 { 00:37:08.071 "name": "BaseBdev1", 00:37:08.071 "uuid": "11c21e78-f586-4c75-910c-a68e2a45c572", 00:37:08.071 "is_configured": true, 00:37:08.071 "data_offset": 256, 00:37:08.071 "data_size": 7936 00:37:08.071 }, 00:37:08.071 { 00:37:08.071 "name": "BaseBdev2", 00:37:08.071 "uuid": "e03ab08e-d5fc-4682-a573-2fefd6a97543", 00:37:08.071 "is_configured": true, 00:37:08.071 "data_offset": 256, 00:37:08.071 "data_size": 7936 00:37:08.071 } 00:37:08.071 ] 00:37:08.071 }' 00:37:08.071 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.071 23:22:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:09.005 [2024-07-13 23:22:58.350169] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:09.005 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:09.006 "name": "Existed_Raid", 00:37:09.006 "aliases": [ 00:37:09.006 "4b9a78f0-0ce9-4ccf-982b-1e01f7896d13" 00:37:09.006 ], 00:37:09.006 "product_name": "Raid Volume", 00:37:09.006 "block_size": 4128, 00:37:09.006 "num_blocks": 7936, 00:37:09.006 "uuid": "4b9a78f0-0ce9-4ccf-982b-1e01f7896d13", 00:37:09.006 "md_size": 32, 00:37:09.006 "md_interleave": true, 00:37:09.006 "dif_type": 0, 00:37:09.006 "assigned_rate_limits": { 00:37:09.006 "rw_ios_per_sec": 0, 00:37:09.006 "rw_mbytes_per_sec": 0, 00:37:09.006 "r_mbytes_per_sec": 0, 00:37:09.006 "w_mbytes_per_sec": 0 00:37:09.006 }, 00:37:09.006 "claimed": false, 00:37:09.006 "zoned": false, 00:37:09.006 "supported_io_types": { 00:37:09.006 "read": true, 00:37:09.006 "write": true, 00:37:09.006 "unmap": false, 00:37:09.006 "flush": false, 00:37:09.006 "reset": true, 00:37:09.006 "nvme_admin": false, 00:37:09.006 "nvme_io": false, 00:37:09.006 "nvme_io_md": false, 00:37:09.006 "write_zeroes": true, 00:37:09.006 "zcopy": false, 00:37:09.006 "get_zone_info": false, 00:37:09.006 "zone_management": false, 00:37:09.006 "zone_append": false, 00:37:09.006 "compare": false, 00:37:09.006 "compare_and_write": false, 00:37:09.006 "abort": false, 00:37:09.006 "seek_hole": false, 00:37:09.006 "seek_data": false, 00:37:09.006 "copy": false, 00:37:09.006 "nvme_iov_md": false 00:37:09.006 }, 00:37:09.006 "memory_domains": [ 00:37:09.006 { 00:37:09.006 "dma_device_id": "system", 00:37:09.006 "dma_device_type": 1 00:37:09.006 }, 00:37:09.006 { 00:37:09.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:09.006 "dma_device_type": 2 00:37:09.006 }, 00:37:09.006 { 00:37:09.006 "dma_device_id": "system", 00:37:09.006 "dma_device_type": 1 00:37:09.006 }, 00:37:09.006 { 00:37:09.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:09.006 "dma_device_type": 2 00:37:09.006 } 00:37:09.006 ], 00:37:09.006 "driver_specific": { 00:37:09.006 "raid": { 00:37:09.006 "uuid": "4b9a78f0-0ce9-4ccf-982b-1e01f7896d13", 00:37:09.006 "strip_size_kb": 0, 00:37:09.006 "state": "online", 00:37:09.006 "raid_level": "raid1", 00:37:09.006 "superblock": true, 00:37:09.006 "num_base_bdevs": 2, 00:37:09.006 "num_base_bdevs_discovered": 2, 00:37:09.006 "num_base_bdevs_operational": 2, 00:37:09.006 "base_bdevs_list": [ 00:37:09.006 { 00:37:09.006 "name": "BaseBdev1", 00:37:09.006 "uuid": "11c21e78-f586-4c75-910c-a68e2a45c572", 00:37:09.006 "is_configured": true, 00:37:09.006 "data_offset": 256, 00:37:09.006 "data_size": 7936 00:37:09.006 }, 00:37:09.006 { 00:37:09.006 "name": "BaseBdev2", 00:37:09.006 "uuid": "e03ab08e-d5fc-4682-a573-2fefd6a97543", 00:37:09.006 "is_configured": true, 00:37:09.006 "data_offset": 256, 00:37:09.006 "data_size": 7936 00:37:09.006 } 00:37:09.006 ] 00:37:09.006 } 00:37:09.006 } 00:37:09.006 }' 00:37:09.006 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:09.264 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
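The configured base bdev names are pulled straight out of the raid volume dump above; an equivalent one-liner over the raw RPC output (the trace applies the same filter to the already-unwrapped object):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'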
base_bdev_names='BaseBdev1 00:37:09.264 BaseBdev2' 00:37:09.264 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:09.264 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:37:09.264 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:09.264 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:09.264 "name": "BaseBdev1", 00:37:09.264 "aliases": [ 00:37:09.264 "11c21e78-f586-4c75-910c-a68e2a45c572" 00:37:09.264 ], 00:37:09.264 "product_name": "Malloc disk", 00:37:09.264 "block_size": 4128, 00:37:09.264 "num_blocks": 8192, 00:37:09.264 "uuid": "11c21e78-f586-4c75-910c-a68e2a45c572", 00:37:09.264 "md_size": 32, 00:37:09.264 "md_interleave": true, 00:37:09.264 "dif_type": 0, 00:37:09.264 "assigned_rate_limits": { 00:37:09.264 "rw_ios_per_sec": 0, 00:37:09.264 "rw_mbytes_per_sec": 0, 00:37:09.264 "r_mbytes_per_sec": 0, 00:37:09.264 "w_mbytes_per_sec": 0 00:37:09.264 }, 00:37:09.264 "claimed": true, 00:37:09.264 "claim_type": "exclusive_write", 00:37:09.264 "zoned": false, 00:37:09.264 "supported_io_types": { 00:37:09.264 "read": true, 00:37:09.264 "write": true, 00:37:09.264 "unmap": true, 00:37:09.264 "flush": true, 00:37:09.265 "reset": true, 00:37:09.265 "nvme_admin": false, 00:37:09.265 "nvme_io": false, 00:37:09.265 "nvme_io_md": false, 00:37:09.265 "write_zeroes": true, 00:37:09.265 "zcopy": true, 00:37:09.265 "get_zone_info": false, 00:37:09.265 "zone_management": false, 00:37:09.265 "zone_append": false, 00:37:09.265 "compare": false, 00:37:09.265 "compare_and_write": false, 00:37:09.265 "abort": true, 00:37:09.265 "seek_hole": false, 00:37:09.265 "seek_data": false, 00:37:09.265 "copy": true, 00:37:09.265 "nvme_iov_md": false 00:37:09.265 }, 00:37:09.265 "memory_domains": [ 00:37:09.265 { 00:37:09.265 "dma_device_id": "system", 00:37:09.265 "dma_device_type": 1 00:37:09.265 }, 00:37:09.265 { 00:37:09.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:09.265 "dma_device_type": 2 00:37:09.265 } 00:37:09.265 ], 00:37:09.265 "driver_specific": {} 00:37:09.265 }' 00:37:09.265 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:09.523 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:09.782 23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:09.782 
23:22:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:09.782 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:09.782 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:09.782 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:09.782 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:10.040 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:10.040 "name": "BaseBdev2", 00:37:10.040 "aliases": [ 00:37:10.040 "e03ab08e-d5fc-4682-a573-2fefd6a97543" 00:37:10.041 ], 00:37:10.041 "product_name": "Malloc disk", 00:37:10.041 "block_size": 4128, 00:37:10.041 "num_blocks": 8192, 00:37:10.041 "uuid": "e03ab08e-d5fc-4682-a573-2fefd6a97543", 00:37:10.041 "md_size": 32, 00:37:10.041 "md_interleave": true, 00:37:10.041 "dif_type": 0, 00:37:10.041 "assigned_rate_limits": { 00:37:10.041 "rw_ios_per_sec": 0, 00:37:10.041 "rw_mbytes_per_sec": 0, 00:37:10.041 "r_mbytes_per_sec": 0, 00:37:10.041 "w_mbytes_per_sec": 0 00:37:10.041 }, 00:37:10.041 "claimed": true, 00:37:10.041 "claim_type": "exclusive_write", 00:37:10.041 "zoned": false, 00:37:10.041 "supported_io_types": { 00:37:10.041 "read": true, 00:37:10.041 "write": true, 00:37:10.041 "unmap": true, 00:37:10.041 "flush": true, 00:37:10.041 "reset": true, 00:37:10.041 "nvme_admin": false, 00:37:10.041 "nvme_io": false, 00:37:10.041 "nvme_io_md": false, 00:37:10.041 "write_zeroes": true, 00:37:10.041 "zcopy": true, 00:37:10.041 "get_zone_info": false, 00:37:10.041 "zone_management": false, 00:37:10.041 "zone_append": false, 00:37:10.041 "compare": false, 00:37:10.041 "compare_and_write": false, 00:37:10.041 "abort": true, 00:37:10.041 "seek_hole": false, 00:37:10.041 "seek_data": false, 00:37:10.041 "copy": true, 00:37:10.041 "nvme_iov_md": false 00:37:10.041 }, 00:37:10.041 "memory_domains": [ 00:37:10.041 { 00:37:10.041 "dma_device_id": "system", 00:37:10.041 "dma_device_type": 1 00:37:10.041 }, 00:37:10.041 { 00:37:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:10.041 "dma_device_type": 2 00:37:10.041 } 00:37:10.041 ], 00:37:10.041 "driver_specific": {} 00:37:10.041 }' 00:37:10.041 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:10.041 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:10.041 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:10.041 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:10.041 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:10.299 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:10.299 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:10.299 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:10.299 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:10.299 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:10.300 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:10.300 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:10.300 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:10.558 [2024-07-13 23:22:59.858329] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.558 23:22:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:10.817 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:10.817 "name": "Existed_Raid", 00:37:10.817 "uuid": "4b9a78f0-0ce9-4ccf-982b-1e01f7896d13", 00:37:10.817 "strip_size_kb": 0, 00:37:10.817 "state": "online", 00:37:10.817 "raid_level": "raid1", 00:37:10.817 "superblock": true, 00:37:10.817 "num_base_bdevs": 2, 00:37:10.817 "num_base_bdevs_discovered": 1, 00:37:10.817 "num_base_bdevs_operational": 1, 00:37:10.817 "base_bdevs_list": [ 00:37:10.817 { 00:37:10.817 "name": null, 
00:37:10.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.817 "is_configured": false, 00:37:10.817 "data_offset": 256, 00:37:10.817 "data_size": 7936 00:37:10.817 }, 00:37:10.817 { 00:37:10.817 "name": "BaseBdev2", 00:37:10.817 "uuid": "e03ab08e-d5fc-4682-a573-2fefd6a97543", 00:37:10.817 "is_configured": true, 00:37:10.817 "data_offset": 256, 00:37:10.817 "data_size": 7936 00:37:10.817 } 00:37:10.817 ] 00:37:10.817 }' 00:37:10.817 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:10.817 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:11.753 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:37:11.753 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:11.753 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.753 23:23:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:11.753 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:11.753 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:11.753 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:12.012 [2024-07-13 23:23:01.301625] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:12.012 [2024-07-13 23:23:01.301975] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:12.012 [2024-07-13 23:23:01.314614] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:12.012 [2024-07-13 23:23:01.314931] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:12.012 [2024-07-13 23:23:01.315048] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:37:12.012 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:12.012 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:12.012 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:12.012 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:37:12.272 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:37:12.272 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:37:12.272 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:37:12.272 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 171927 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 171927 ']' 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 171927 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171927 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171927' 00:37:12.273 killing process with pid 171927 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 171927 00:37:12.273 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 171927 00:37:12.273 [2024-07-13 23:23:01.581760] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:12.273 [2024-07-13 23:23:01.581840] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:12.531 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:37:12.531 00:37:12.531 real 0m11.220s 00:37:12.531 user 0m20.651s 00:37:12.531 sys 0m1.456s 00:37:12.531 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:12.531 23:23:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:12.531 ************************************ 00:37:12.531 END TEST raid_state_function_test_sb_md_interleaved 00:37:12.531 ************************************ 00:37:12.531 23:23:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:12.531 23:23:01 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:37:12.531 23:23:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:12.531 23:23:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:12.531 23:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:12.531 ************************************ 00:37:12.531 START TEST raid_superblock_test_md_interleaved 00:37:12.531 ************************************ 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:37:12.531 23:23:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:37:12.531 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=172291 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 172291 /var/tmp/spdk-raid.sock 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 172291 ']' 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:12.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:12.532 23:23:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:12.532 [2024-07-13 23:23:01.932666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
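Each of these tests hosts its bdevs in a bare bdev_svc app on a private RPC socket, as launched above; a sketch of the equivalent manual startup (the polling loop is a stand-in for the harness's waitforlisten helper, not its actual implementation):

  # start a minimal SPDK app with raid debug logging on a dedicated socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # wait until the socket accepts RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done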
00:37:12.532 [2024-07-13 23:23:01.933113] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172291 ] 00:37:12.789 [2024-07-13 23:23:02.079981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.789 [2024-07-13 23:23:02.150585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.047 [2024-07-13 23:23:02.204776] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:13.047 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:37:13.307 malloc1 00:37:13.307 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:13.565 [2024-07-13 23:23:02.793699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:13.565 [2024-07-13 23:23:02.794094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:13.565 [2024-07-13 23:23:02.794260] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:37:13.565 [2024-07-13 23:23:02.794412] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:13.565 [2024-07-13 23:23:02.796808] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:13.565 [2024-07-13 23:23:02.797017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:13.565 pt1 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- 
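The malloc1 creation above encodes the interleaved-metadata geometry that every dump in this test reports: 32 MiB of data in 4096-byte blocks gives 8192 blocks, and the 32 bytes of interleaved metadata per block is why block_size shows up as 4128 (4096 + 32):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_malloc_create 32 4096 -m 32 -i -b malloc1   # 32 MiB, 4 KiB blocks, 32 B interleaved md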
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:13.565 23:23:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:37:13.824 malloc2 00:37:13.824 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:14.083 [2024-07-13 23:23:03.279812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:14.083 [2024-07-13 23:23:03.280128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:14.083 [2024-07-13 23:23:03.280289] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:37:14.083 [2024-07-13 23:23:03.280438] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:14.083 [2024-07-13 23:23:03.282656] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:14.083 [2024-07-13 23:23:03.282828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:14.083 pt2 00:37:14.083 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:14.083 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:14.083 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:37:14.341 [2024-07-13 23:23:03.495978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:14.341 [2024-07-13 23:23:03.498231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:14.341 [2024-07-13 23:23:03.498619] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:37:14.341 [2024-07-13 23:23:03.498748] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:14.341 [2024-07-13 23:23:03.498947] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:37:14.341 [2024-07-13 23:23:03.499152] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:37:14.341 [2024-07-13 23:23:03.499257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:37:14.341 [2024-07-13 23:23:03.499438] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
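The assembly sequence traced above wraps each malloc bdev in a passthru with a fixed UUID and then builds the array with an on-disk superblock; a sketch:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # -s writes a raid superblock to each base bdev, reserving space at the front of each leg,
  # consistent with the dumps reporting data_offset 256 and data_size 7936 (8192 - 256)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s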
raid_bdev_name=raid_bdev1 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.341 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:14.341 "name": "raid_bdev1", 00:37:14.341 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:14.341 "strip_size_kb": 0, 00:37:14.342 "state": "online", 00:37:14.342 "raid_level": "raid1", 00:37:14.342 "superblock": true, 00:37:14.342 "num_base_bdevs": 2, 00:37:14.342 "num_base_bdevs_discovered": 2, 00:37:14.342 "num_base_bdevs_operational": 2, 00:37:14.342 "base_bdevs_list": [ 00:37:14.342 { 00:37:14.342 "name": "pt1", 00:37:14.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:14.342 "is_configured": true, 00:37:14.342 "data_offset": 256, 00:37:14.342 "data_size": 7936 00:37:14.342 }, 00:37:14.342 { 00:37:14.342 "name": "pt2", 00:37:14.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:14.342 "is_configured": true, 00:37:14.342 "data_offset": 256, 00:37:14.342 "data_size": 7936 00:37:14.342 } 00:37:14.342 ] 00:37:14.342 }' 00:37:14.342 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:14.342 23:23:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:15.276 [2024-07-13 23:23:04.576472] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:15.276 "name": "raid_bdev1", 00:37:15.276 "aliases": [ 00:37:15.276 "93029245-c56d-43dd-b2fe-fb527c276c88" 00:37:15.276 ], 00:37:15.276 "product_name": "Raid Volume", 00:37:15.276 "block_size": 4128, 00:37:15.276 "num_blocks": 7936, 00:37:15.276 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:15.276 "md_size": 32, 00:37:15.276 "md_interleave": true, 00:37:15.276 "dif_type": 0, 00:37:15.276 "assigned_rate_limits": { 00:37:15.276 "rw_ios_per_sec": 0, 00:37:15.276 "rw_mbytes_per_sec": 0, 00:37:15.276 "r_mbytes_per_sec": 0, 00:37:15.276 "w_mbytes_per_sec": 0 00:37:15.276 }, 00:37:15.276 "claimed": false, 00:37:15.276 "zoned": false, 00:37:15.276 "supported_io_types": { 00:37:15.276 "read": true, 00:37:15.276 "write": true, 00:37:15.276 "unmap": false, 00:37:15.276 "flush": false, 00:37:15.276 "reset": true, 00:37:15.276 "nvme_admin": false, 00:37:15.276 "nvme_io": false, 00:37:15.276 "nvme_io_md": false, 00:37:15.276 "write_zeroes": true, 00:37:15.276 "zcopy": false, 00:37:15.276 "get_zone_info": false, 00:37:15.276 "zone_management": false, 00:37:15.276 "zone_append": false, 00:37:15.276 "compare": false, 00:37:15.276 "compare_and_write": false, 00:37:15.276 "abort": false, 00:37:15.276 "seek_hole": false, 00:37:15.276 "seek_data": false, 00:37:15.276 "copy": false, 00:37:15.276 "nvme_iov_md": false 00:37:15.276 }, 00:37:15.276 "memory_domains": [ 00:37:15.276 { 00:37:15.276 "dma_device_id": "system", 00:37:15.276 "dma_device_type": 1 00:37:15.276 }, 00:37:15.276 { 00:37:15.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:15.276 "dma_device_type": 2 00:37:15.276 }, 00:37:15.276 { 00:37:15.276 "dma_device_id": "system", 00:37:15.276 "dma_device_type": 1 00:37:15.276 }, 00:37:15.276 { 00:37:15.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:15.276 "dma_device_type": 2 00:37:15.276 } 00:37:15.276 ], 00:37:15.276 "driver_specific": { 00:37:15.276 "raid": { 00:37:15.276 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:15.276 "strip_size_kb": 0, 00:37:15.276 "state": "online", 00:37:15.276 "raid_level": "raid1", 00:37:15.276 "superblock": true, 00:37:15.276 "num_base_bdevs": 2, 00:37:15.276 "num_base_bdevs_discovered": 2, 00:37:15.276 "num_base_bdevs_operational": 2, 00:37:15.276 "base_bdevs_list": [ 00:37:15.276 { 00:37:15.276 "name": "pt1", 00:37:15.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:15.276 "is_configured": true, 00:37:15.276 "data_offset": 256, 00:37:15.276 "data_size": 7936 00:37:15.276 }, 00:37:15.276 { 00:37:15.276 "name": "pt2", 00:37:15.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:15.276 "is_configured": true, 00:37:15.276 "data_offset": 256, 00:37:15.276 "data_size": 7936 00:37:15.276 } 00:37:15.276 ] 00:37:15.276 } 00:37:15.276 } 00:37:15.276 }' 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:15.276 pt2' 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:15.276 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:15.534 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:15.534 "name": "pt1", 00:37:15.534 "aliases": [ 00:37:15.534 "00000000-0000-0000-0000-000000000001" 00:37:15.534 ], 00:37:15.534 "product_name": "passthru", 00:37:15.534 "block_size": 4128, 00:37:15.534 "num_blocks": 8192, 00:37:15.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:15.534 "md_size": 32, 00:37:15.534 "md_interleave": true, 00:37:15.534 "dif_type": 0, 00:37:15.534 "assigned_rate_limits": { 00:37:15.534 "rw_ios_per_sec": 0, 00:37:15.534 "rw_mbytes_per_sec": 0, 00:37:15.534 "r_mbytes_per_sec": 0, 00:37:15.534 "w_mbytes_per_sec": 0 00:37:15.534 }, 00:37:15.534 "claimed": true, 00:37:15.534 "claim_type": "exclusive_write", 00:37:15.534 "zoned": false, 00:37:15.534 "supported_io_types": { 00:37:15.534 "read": true, 00:37:15.534 "write": true, 00:37:15.534 "unmap": true, 00:37:15.534 "flush": true, 00:37:15.534 "reset": true, 00:37:15.534 "nvme_admin": false, 00:37:15.534 "nvme_io": false, 00:37:15.534 "nvme_io_md": false, 00:37:15.534 "write_zeroes": true, 00:37:15.534 "zcopy": true, 00:37:15.534 "get_zone_info": false, 00:37:15.534 "zone_management": false, 00:37:15.534 "zone_append": false, 00:37:15.534 "compare": false, 00:37:15.534 "compare_and_write": false, 00:37:15.534 "abort": true, 00:37:15.534 "seek_hole": false, 00:37:15.535 "seek_data": false, 00:37:15.535 "copy": true, 00:37:15.535 "nvme_iov_md": false 00:37:15.535 }, 00:37:15.535 "memory_domains": [ 00:37:15.535 { 00:37:15.535 "dma_device_id": "system", 00:37:15.535 "dma_device_type": 1 00:37:15.535 }, 00:37:15.535 { 00:37:15.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:15.535 "dma_device_type": 2 00:37:15.535 } 00:37:15.535 ], 00:37:15.535 "driver_specific": { 00:37:15.535 "passthru": { 00:37:15.535 "name": "pt1", 00:37:15.535 "base_bdev_name": "malloc1" 00:37:15.535 } 00:37:15.535 } 00:37:15.535 }' 00:37:15.535 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.535 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.792 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:15.792 23:23:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.792 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.792 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:15.792 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.792 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.792 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:15.792 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.050 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.050 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:16.050 23:23:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:16.050 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:16.050 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:16.307 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:16.308 "name": "pt2", 00:37:16.308 "aliases": [ 00:37:16.308 "00000000-0000-0000-0000-000000000002" 00:37:16.308 ], 00:37:16.308 "product_name": "passthru", 00:37:16.308 "block_size": 4128, 00:37:16.308 "num_blocks": 8192, 00:37:16.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:16.308 "md_size": 32, 00:37:16.308 "md_interleave": true, 00:37:16.308 "dif_type": 0, 00:37:16.308 "assigned_rate_limits": { 00:37:16.308 "rw_ios_per_sec": 0, 00:37:16.308 "rw_mbytes_per_sec": 0, 00:37:16.308 "r_mbytes_per_sec": 0, 00:37:16.308 "w_mbytes_per_sec": 0 00:37:16.308 }, 00:37:16.308 "claimed": true, 00:37:16.308 "claim_type": "exclusive_write", 00:37:16.308 "zoned": false, 00:37:16.308 "supported_io_types": { 00:37:16.308 "read": true, 00:37:16.308 "write": true, 00:37:16.308 "unmap": true, 00:37:16.308 "flush": true, 00:37:16.308 "reset": true, 00:37:16.308 "nvme_admin": false, 00:37:16.308 "nvme_io": false, 00:37:16.308 "nvme_io_md": false, 00:37:16.308 "write_zeroes": true, 00:37:16.308 "zcopy": true, 00:37:16.308 "get_zone_info": false, 00:37:16.308 "zone_management": false, 00:37:16.308 "zone_append": false, 00:37:16.308 "compare": false, 00:37:16.308 "compare_and_write": false, 00:37:16.308 "abort": true, 00:37:16.308 "seek_hole": false, 00:37:16.308 "seek_data": false, 00:37:16.308 "copy": true, 00:37:16.308 "nvme_iov_md": false 00:37:16.308 }, 00:37:16.308 "memory_domains": [ 00:37:16.308 { 00:37:16.308 "dma_device_id": "system", 00:37:16.308 "dma_device_type": 1 00:37:16.308 }, 00:37:16.308 { 00:37:16.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:16.308 "dma_device_type": 2 00:37:16.308 } 00:37:16.308 ], 00:37:16.308 "driver_specific": { 00:37:16.308 "passthru": { 00:37:16.308 "name": "pt2", 00:37:16.308 "base_bdev_name": "malloc2" 00:37:16.308 } 00:37:16.308 } 00:37:16.308 }' 00:37:16.308 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:16.308 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:16.308 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:16.308 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:16.308 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- 
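verify_raid_bdev_properties, looping above over pt1 and pt2, asserts the same four interleaved-metadata fields on every base bdev; a condensed sketch of one iteration (the helper itself unwraps the array with jq '.[]' first):

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2)
  # 4128-byte blocks (4096 data + 32 md), interleaved layout, no DIF
  [[ $(jq '.[].block_size' <<< "$info") == 4128 ]]
  [[ $(jq '.[].md_size' <<< "$info") == 32 ]]
  [[ $(jq '.[].md_interleave' <<< "$info") == true ]]
  [[ $(jq '.[].dif_type' <<< "$info") == 0 ]]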
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:16.565 23:23:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:37:16.823 [2024-07-13 23:23:06.188815] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:16.823 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=93029245-c56d-43dd-b2fe-fb527c276c88 00:37:16.823 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 93029245-c56d-43dd-b2fe-fb527c276c88 ']' 00:37:16.823 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:17.081 [2024-07-13 23:23:06.476608] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:17.082 [2024-07-13 23:23:06.476813] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:17.082 [2024-07-13 23:23:06.477040] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:17.082 [2024-07-13 23:23:06.477244] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:17.082 [2024-07-13 23:23:06.477376] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:37:17.340 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:17.340 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:37:17.598 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:37:17.598 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:37:17.598 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:17.598 23:23:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:17.857 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:17.857 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:17.857 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:17.857 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:18.116 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:18.375 [2024-07-13 23:23:07.668848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:18.375 [2024-07-13 23:23:07.671232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:18.375 [2024-07-13 23:23:07.671461] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:18.375 [2024-07-13 23:23:07.671662] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:18.375 [2024-07-13 23:23:07.671813] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:18.375 [2024-07-13 23:23:07.671915] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:37:18.375 request: 00:37:18.375 { 00:37:18.375 "name": "raid_bdev1", 00:37:18.375 "raid_level": "raid1", 00:37:18.375 "base_bdevs": [ 00:37:18.375 "malloc1", 00:37:18.375 "malloc2" 00:37:18.375 ], 00:37:18.375 "superblock": false, 00:37:18.375 "method": "bdev_raid_create", 00:37:18.375 "req_id": 1 00:37:18.375 } 00:37:18.375 Got JSON-RPC error response 00:37:18.375 response: 00:37:18.375 { 00:37:18.375 "code": -17, 00:37:18.375 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:18.375 } 00:37:18.375 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:37:18.375 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:18.375 23:23:07 bdev_raid.raid_superblock_test_md_interleaved 
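The failed create above is deliberate: malloc1 and malloc2 still carry the superblock written for raid_bdev1, so building a new array directly on them is rejected with -17 (File exists). The harness wraps the call in NOT to assert the failure; by hand:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 \
    && echo "unexpected success" || echo "rejected as expected"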
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:18.375 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:18.375 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.375 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:18.634 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:18.634 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:18.634 23:23:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:18.893 [2024-07-13 23:23:08.168896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:18.893 [2024-07-13 23:23:08.169208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.893 [2024-07-13 23:23:08.169415] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:18.893 [2024-07-13 23:23:08.169574] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.893 [2024-07-13 23:23:08.171751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.893 [2024-07-13 23:23:08.171935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:18.893 [2024-07-13 23:23:08.172151] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:18.893 [2024-07-13 23:23:08.172334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:18.893 pt1 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.893 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.151 23:23:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:19.151 "name": "raid_bdev1", 00:37:19.151 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:19.151 "strip_size_kb": 0, 00:37:19.151 "state": "configuring", 00:37:19.151 "raid_level": "raid1", 00:37:19.151 "superblock": true, 00:37:19.151 "num_base_bdevs": 2, 00:37:19.151 "num_base_bdevs_discovered": 1, 00:37:19.151 "num_base_bdevs_operational": 2, 00:37:19.151 "base_bdevs_list": [ 00:37:19.151 { 00:37:19.151 "name": "pt1", 00:37:19.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:19.152 "is_configured": true, 00:37:19.152 "data_offset": 256, 00:37:19.152 "data_size": 7936 00:37:19.152 }, 00:37:19.152 { 00:37:19.152 "name": null, 00:37:19.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:19.152 "is_configured": false, 00:37:19.152 "data_offset": 256, 00:37:19.152 "data_size": 7936 00:37:19.152 } 00:37:19.152 ] 00:37:19.152 }' 00:37:19.152 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:19.152 23:23:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.719 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:37:19.719 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:37:19.719 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:19.719 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:19.978 [2024-07-13 23:23:09.357246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:19.979 [2024-07-13 23:23:09.357546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.979 [2024-07-13 23:23:09.357746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:37:19.979 [2024-07-13 23:23:09.357918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.979 [2024-07-13 23:23:09.358228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.979 [2024-07-13 23:23:09.358410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:19.979 [2024-07-13 23:23:09.358582] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:19.979 [2024-07-13 23:23:09.358702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:19.979 [2024-07-13 23:23:09.358859] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:37:19.979 [2024-07-13 23:23:09.358969] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:19.979 [2024-07-13 23:23:09.359093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:37:19.979 [2024-07-13 23:23:09.359304] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:37:19.979 [2024-07-13 23:23:09.359423] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:37:19.979 [2024-07-13 23:23:09.359579] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:19.979 pt2 00:37:19.979 
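This is the superblock re-assembly path: re-creating each passthru triggers examine, the raid superblock is found on the base bdev, and the array climbs from "configuring" (one of two legs claimed) back to "online". A sketch, assuming the same fixed UUIDs as the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # examine finds the raid superblock on pt1; raid_bdev1 appears in state "configuring"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # with both legs claimed, raid_bdev1 transitions back to "online"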
23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:19.979 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.238 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:20.238 "name": "raid_bdev1", 00:37:20.238 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:20.238 "strip_size_kb": 0, 00:37:20.238 "state": "online", 00:37:20.238 "raid_level": "raid1", 00:37:20.238 "superblock": true, 00:37:20.238 "num_base_bdevs": 2, 00:37:20.238 "num_base_bdevs_discovered": 2, 00:37:20.238 "num_base_bdevs_operational": 2, 00:37:20.238 "base_bdevs_list": [ 00:37:20.238 { 00:37:20.238 "name": "pt1", 00:37:20.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:20.238 "is_configured": true, 00:37:20.238 "data_offset": 256, 00:37:20.238 "data_size": 7936 00:37:20.238 }, 00:37:20.238 { 00:37:20.238 "name": "pt2", 00:37:20.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.238 "is_configured": true, 00:37:20.238 "data_offset": 256, 00:37:20.238 "data_size": 7936 00:37:20.238 } 00:37:20.238 ] 00:37:20.238 }' 00:37:20.238 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:20.238 23:23:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:21.174 23:23:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:21.174 [2024-07-13 23:23:10.477914] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:21.174 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:21.175 "name": "raid_bdev1", 00:37:21.175 "aliases": [ 00:37:21.175 "93029245-c56d-43dd-b2fe-fb527c276c88" 00:37:21.175 ], 00:37:21.175 "product_name": "Raid Volume", 00:37:21.175 "block_size": 4128, 00:37:21.175 "num_blocks": 7936, 00:37:21.175 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:21.175 "md_size": 32, 00:37:21.175 "md_interleave": true, 00:37:21.175 "dif_type": 0, 00:37:21.175 "assigned_rate_limits": { 00:37:21.175 "rw_ios_per_sec": 0, 00:37:21.175 "rw_mbytes_per_sec": 0, 00:37:21.175 "r_mbytes_per_sec": 0, 00:37:21.175 "w_mbytes_per_sec": 0 00:37:21.175 }, 00:37:21.175 "claimed": false, 00:37:21.175 "zoned": false, 00:37:21.175 "supported_io_types": { 00:37:21.175 "read": true, 00:37:21.175 "write": true, 00:37:21.175 "unmap": false, 00:37:21.175 "flush": false, 00:37:21.175 "reset": true, 00:37:21.175 "nvme_admin": false, 00:37:21.175 "nvme_io": false, 00:37:21.175 "nvme_io_md": false, 00:37:21.175 "write_zeroes": true, 00:37:21.175 "zcopy": false, 00:37:21.175 "get_zone_info": false, 00:37:21.175 "zone_management": false, 00:37:21.175 "zone_append": false, 00:37:21.175 "compare": false, 00:37:21.175 "compare_and_write": false, 00:37:21.175 "abort": false, 00:37:21.175 "seek_hole": false, 00:37:21.175 "seek_data": false, 00:37:21.175 "copy": false, 00:37:21.175 "nvme_iov_md": false 00:37:21.175 }, 00:37:21.175 "memory_domains": [ 00:37:21.175 { 00:37:21.175 "dma_device_id": "system", 00:37:21.175 "dma_device_type": 1 00:37:21.175 }, 00:37:21.175 { 00:37:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:21.175 "dma_device_type": 2 00:37:21.175 }, 00:37:21.175 { 00:37:21.175 "dma_device_id": "system", 00:37:21.175 "dma_device_type": 1 00:37:21.175 }, 00:37:21.175 { 00:37:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:21.175 "dma_device_type": 2 00:37:21.175 } 00:37:21.175 ], 00:37:21.175 "driver_specific": { 00:37:21.175 "raid": { 00:37:21.175 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:21.175 "strip_size_kb": 0, 00:37:21.175 "state": "online", 00:37:21.175 "raid_level": "raid1", 00:37:21.175 "superblock": true, 00:37:21.175 "num_base_bdevs": 2, 00:37:21.175 "num_base_bdevs_discovered": 2, 00:37:21.175 "num_base_bdevs_operational": 2, 00:37:21.175 "base_bdevs_list": [ 00:37:21.175 { 00:37:21.175 "name": "pt1", 00:37:21.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:21.175 "is_configured": true, 00:37:21.175 "data_offset": 256, 00:37:21.175 "data_size": 7936 00:37:21.175 }, 00:37:21.175 { 00:37:21.175 "name": "pt2", 00:37:21.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:21.175 "is_configured": true, 00:37:21.175 "data_offset": 256, 00:37:21.175 "data_size": 7936 00:37:21.175 } 00:37:21.175 ] 00:37:21.175 } 00:37:21.175 } 00:37:21.175 }' 00:37:21.175 23:23:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:21.175 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:21.175 pt2' 00:37:21.175 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:21.175 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:21.175 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:21.450 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:21.450 "name": "pt1", 00:37:21.450 "aliases": [ 00:37:21.450 "00000000-0000-0000-0000-000000000001" 00:37:21.450 ], 00:37:21.450 "product_name": "passthru", 00:37:21.450 "block_size": 4128, 00:37:21.450 "num_blocks": 8192, 00:37:21.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:21.450 "md_size": 32, 00:37:21.450 "md_interleave": true, 00:37:21.450 "dif_type": 0, 00:37:21.450 "assigned_rate_limits": { 00:37:21.450 "rw_ios_per_sec": 0, 00:37:21.450 "rw_mbytes_per_sec": 0, 00:37:21.450 "r_mbytes_per_sec": 0, 00:37:21.450 "w_mbytes_per_sec": 0 00:37:21.450 }, 00:37:21.450 "claimed": true, 00:37:21.450 "claim_type": "exclusive_write", 00:37:21.450 "zoned": false, 00:37:21.450 "supported_io_types": { 00:37:21.450 "read": true, 00:37:21.450 "write": true, 00:37:21.450 "unmap": true, 00:37:21.450 "flush": true, 00:37:21.450 "reset": true, 00:37:21.450 "nvme_admin": false, 00:37:21.450 "nvme_io": false, 00:37:21.450 "nvme_io_md": false, 00:37:21.450 "write_zeroes": true, 00:37:21.450 "zcopy": true, 00:37:21.450 "get_zone_info": false, 00:37:21.450 "zone_management": false, 00:37:21.450 "zone_append": false, 00:37:21.450 "compare": false, 00:37:21.450 "compare_and_write": false, 00:37:21.450 "abort": true, 00:37:21.450 "seek_hole": false, 00:37:21.450 "seek_data": false, 00:37:21.450 "copy": true, 00:37:21.450 "nvme_iov_md": false 00:37:21.450 }, 00:37:21.450 "memory_domains": [ 00:37:21.450 { 00:37:21.450 "dma_device_id": "system", 00:37:21.450 "dma_device_type": 1 00:37:21.450 }, 00:37:21.450 { 00:37:21.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:21.450 "dma_device_type": 2 00:37:21.450 } 00:37:21.450 ], 00:37:21.450 "driver_specific": { 00:37:21.450 "passthru": { 00:37:21.450 "name": "pt1", 00:37:21.450 "base_bdev_name": "malloc1" 00:37:21.450 } 00:37:21.450 } 00:37:21.450 }' 00:37:21.450 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:21.450 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:21.732 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:21.732 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:21.732 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:21.732 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:21.732 23:23:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:21.732 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:37:21.732 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:21.732 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:21.732 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:21.995 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:21.995 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:21.995 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:21.995 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:22.253 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:22.253 "name": "pt2", 00:37:22.253 "aliases": [ 00:37:22.253 "00000000-0000-0000-0000-000000000002" 00:37:22.253 ], 00:37:22.253 "product_name": "passthru", 00:37:22.253 "block_size": 4128, 00:37:22.253 "num_blocks": 8192, 00:37:22.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:22.253 "md_size": 32, 00:37:22.253 "md_interleave": true, 00:37:22.253 "dif_type": 0, 00:37:22.253 "assigned_rate_limits": { 00:37:22.253 "rw_ios_per_sec": 0, 00:37:22.253 "rw_mbytes_per_sec": 0, 00:37:22.253 "r_mbytes_per_sec": 0, 00:37:22.253 "w_mbytes_per_sec": 0 00:37:22.253 }, 00:37:22.253 "claimed": true, 00:37:22.253 "claim_type": "exclusive_write", 00:37:22.253 "zoned": false, 00:37:22.253 "supported_io_types": { 00:37:22.253 "read": true, 00:37:22.253 "write": true, 00:37:22.253 "unmap": true, 00:37:22.253 "flush": true, 00:37:22.253 "reset": true, 00:37:22.253 "nvme_admin": false, 00:37:22.253 "nvme_io": false, 00:37:22.253 "nvme_io_md": false, 00:37:22.253 "write_zeroes": true, 00:37:22.253 "zcopy": true, 00:37:22.253 "get_zone_info": false, 00:37:22.253 "zone_management": false, 00:37:22.253 "zone_append": false, 00:37:22.253 "compare": false, 00:37:22.253 "compare_and_write": false, 00:37:22.253 "abort": true, 00:37:22.253 "seek_hole": false, 00:37:22.253 "seek_data": false, 00:37:22.253 "copy": true, 00:37:22.253 "nvme_iov_md": false 00:37:22.253 }, 00:37:22.253 "memory_domains": [ 00:37:22.253 { 00:37:22.253 "dma_device_id": "system", 00:37:22.253 "dma_device_type": 1 00:37:22.253 }, 00:37:22.253 { 00:37:22.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:22.253 "dma_device_type": 2 00:37:22.253 } 00:37:22.253 ], 00:37:22.253 "driver_specific": { 00:37:22.253 "passthru": { 00:37:22.253 "name": "pt2", 00:37:22.253 "base_bdev_name": "malloc2" 00:37:22.253 } 00:37:22.253 } 00:37:22.253 }' 00:37:22.253 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:22.253 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:22.253 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:22.253 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:22.253 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:22.512 23:23:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:22.512 23:23:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:37:22.770 [2024-07-13 23:23:12.074341] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:22.770 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 93029245-c56d-43dd-b2fe-fb527c276c88 '!=' 93029245-c56d-43dd-b2fe-fb527c276c88 ']' 00:37:22.770 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:37:22.770 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:22.770 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:37:22.770 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:23.028 [2024-07-13 23:23:12.334148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:23.028 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.286 23:23:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:23.286 "name": "raid_bdev1", 00:37:23.286 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:23.286 "strip_size_kb": 0, 00:37:23.286 "state": "online", 00:37:23.286 "raid_level": "raid1", 00:37:23.286 "superblock": true, 00:37:23.286 "num_base_bdevs": 2, 00:37:23.286 "num_base_bdevs_discovered": 1, 00:37:23.286 "num_base_bdevs_operational": 1, 00:37:23.286 "base_bdevs_list": [ 00:37:23.286 { 00:37:23.286 "name": null, 00:37:23.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:23.286 "is_configured": false, 00:37:23.286 "data_offset": 256, 00:37:23.286 "data_size": 7936 00:37:23.286 }, 00:37:23.286 { 00:37:23.286 "name": "pt2", 00:37:23.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:23.286 "is_configured": true, 00:37:23.286 "data_offset": 256, 00:37:23.286 "data_size": 7936 00:37:23.286 } 00:37:23.286 ] 00:37:23.286 }' 00:37:23.286 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:23.286 23:23:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.852 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:24.110 [2024-07-13 23:23:13.462364] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:24.110 [2024-07-13 23:23:13.462575] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:24.110 [2024-07-13 23:23:13.462787] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:24.110 [2024-07-13 23:23:13.462954] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:24.110 [2024-07-13 23:23:13.463072] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:37:24.110 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.110 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:37:24.366 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:37:24.366 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:37:24.366 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:37:24.366 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:24.366 23:23:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:24.623 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:24.623 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:24.623 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:37:24.623 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:24.623 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@518 -- # i=1 00:37:24.623 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:24.880 [2024-07-13 23:23:14.230516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:24.880 [2024-07-13 23:23:14.230799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:24.880 [2024-07-13 23:23:14.230970] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:24.880 [2024-07-13 23:23:14.231104] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:24.880 [2024-07-13 23:23:14.233226] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:24.880 [2024-07-13 23:23:14.233444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:24.880 [2024-07-13 23:23:14.233624] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:24.880 [2024-07-13 23:23:14.233758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:24.880 [2024-07-13 23:23:14.233895] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:37:24.880 [2024-07-13 23:23:14.234011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:24.880 [2024-07-13 23:23:14.234144] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:37:24.880 [2024-07-13 23:23:14.234319] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:37:24.880 [2024-07-13 23:23:14.234462] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:37:24.880 [2024-07-13 23:23:14.234654] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:24.880 pt2 00:37:24.880 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:24.880 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.881 23:23:14 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.139 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:25.139 "name": "raid_bdev1", 00:37:25.139 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:25.139 "strip_size_kb": 0, 00:37:25.139 "state": "online", 00:37:25.139 "raid_level": "raid1", 00:37:25.139 "superblock": true, 00:37:25.139 "num_base_bdevs": 2, 00:37:25.139 "num_base_bdevs_discovered": 1, 00:37:25.139 "num_base_bdevs_operational": 1, 00:37:25.139 "base_bdevs_list": [ 00:37:25.139 { 00:37:25.139 "name": null, 00:37:25.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:25.139 "is_configured": false, 00:37:25.139 "data_offset": 256, 00:37:25.139 "data_size": 7936 00:37:25.139 }, 00:37:25.139 { 00:37:25.139 "name": "pt2", 00:37:25.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:25.139 "is_configured": true, 00:37:25.139 "data_offset": 256, 00:37:25.139 "data_size": 7936 00:37:25.139 } 00:37:25.139 ] 00:37:25.139 }' 00:37:25.139 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:25.139 23:23:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:26.072 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:26.072 [2024-07-13 23:23:15.422956] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:26.072 [2024-07-13 23:23:15.423185] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:26.072 [2024-07-13 23:23:15.423367] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:26.072 [2024-07-13 23:23:15.423538] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:26.072 [2024-07-13 23:23:15.423642] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:37:26.072 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.072 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:37:26.330 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:37:26.330 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:37:26.330 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:37:26.330 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:26.589 [2024-07-13 23:23:15.915071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:26.589 [2024-07-13 23:23:15.915425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.589 [2024-07-13 23:23:15.915594] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:26.589 [2024-07-13 23:23:15.915729] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.589 [2024-07-13 23:23:15.918173] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.589 [2024-07-13 23:23:15.918381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:26.589 [2024-07-13 23:23:15.918580] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:26.589 [2024-07-13 23:23:15.918718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:26.589 [2024-07-13 23:23:15.918983] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:26.589 [2024-07-13 23:23:15.919108] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:26.589 [2024-07-13 23:23:15.919175] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:37:26.589 [2024-07-13 23:23:15.919408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:26.589 [2024-07-13 23:23:15.919693] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:37:26.589 [2024-07-13 23:23:15.919804] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:26.589 [2024-07-13 23:23:15.919939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:37:26.589 [2024-07-13 23:23:15.920138] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:37:26.589 pt1 00:37:26.589 [2024-07-13 23:23:15.920240] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:37:26.589 [2024-07-13 23:23:15.920322] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.589 23:23:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.847 23:23:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:26.847 "name": "raid_bdev1", 00:37:26.847 "uuid": "93029245-c56d-43dd-b2fe-fb527c276c88", 00:37:26.847 "strip_size_kb": 0, 00:37:26.847 "state": "online", 00:37:26.847 "raid_level": "raid1", 00:37:26.847 "superblock": true, 00:37:26.847 "num_base_bdevs": 2, 00:37:26.847 "num_base_bdevs_discovered": 1, 00:37:26.847 "num_base_bdevs_operational": 1, 00:37:26.847 "base_bdevs_list": [ 00:37:26.847 { 00:37:26.847 "name": null, 00:37:26.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:26.847 "is_configured": false, 00:37:26.847 "data_offset": 256, 00:37:26.847 "data_size": 7936 00:37:26.847 }, 00:37:26.847 { 00:37:26.847 "name": "pt2", 00:37:26.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:26.847 "is_configured": true, 00:37:26.847 "data_offset": 256, 00:37:26.847 "data_size": 7936 00:37:26.847 } 00:37:26.847 ] 00:37:26.847 }' 00:37:26.847 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:26.847 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:27.413 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:27.413 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:27.671 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:37:27.671 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:37:27.671 23:23:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:27.929 [2024-07-13 23:23:17.235650] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 93029245-c56d-43dd-b2fe-fb527c276c88 '!=' 93029245-c56d-43dd-b2fe-fb527c276c88 ']' 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 172291 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 172291 ']' 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 172291 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172291 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172291' 00:37:27.929 killing process with pid 172291 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 
-- # kill 172291 00:37:27.929 [2024-07-13 23:23:17.280239] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:27.929 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 172291 00:37:27.929 [2024-07-13 23:23:17.280507] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:27.929 [2024-07-13 23:23:17.280682] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:27.929 [2024-07-13 23:23:17.280789] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:37:27.929 [2024-07-13 23:23:17.302062] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:28.187 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:37:28.187 00:37:28.187 real 0m15.663s 00:37:28.187 user 0m29.908s 00:37:28.187 sys 0m1.926s 00:37:28.187 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:28.187 23:23:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.187 ************************************ 00:37:28.187 END TEST raid_superblock_test_md_interleaved 00:37:28.187 ************************************ 00:37:28.187 23:23:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:28.187 23:23:17 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:37:28.187 23:23:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:28.187 23:23:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.187 23:23:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:28.446 ************************************ 00:37:28.446 START TEST raid_rebuild_test_sb_md_interleaved 00:37:28.446 ************************************ 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
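For readers following the trace: run_test hands its positional arguments straight to raid_rebuild_test, and the xtrace above shows how they land in bdev_raid.sh's locals. A sketch of that mapping, using only values visible in the trace:

    raid_rebuild_test raid1 2 true false false
    # raid_level=raid1   -> strip_size=0 (raid1 does not stripe)
    # num_base_bdevs=2   -> base_bdevs=('BaseBdev1' 'BaseBdev2'),
    #                       assembled by the loop continuing below
    # superblock=true    -> create_arg+=' -s'
    # background_io=false, verify=false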
00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=172803 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 172803 /var/tmp/spdk-raid.sock 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 172803 ']' 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:28.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:28.446 23:23:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.446 [2024-07-13 23:23:17.663266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:37:28.446 [2024-07-13 23:23:17.664256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172803 ] 00:37:28.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:28.446 Zero copy mechanism will not be used. 
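The harness runs the whole rebuild scenario inside SPDK's bdevperf example app: bdevperf owns the bdev layer, and every rpc.py call in this trace addresses it over the UNIX socket given with -r. A minimal sketch of that pairing, taken from the invocation above (paths abbreviated):

    build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all

The -o 3M request size also explains the notice above: 3M is 3145728 bytes, well past the 65536-byte threshold, so the app announces that zero copy is disabled for this run.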
00:37:28.446 [2024-07-13 23:23:17.804918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.705 [2024-07-13 23:23:17.887447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.705 [2024-07-13 23:23:17.940862] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:29.272 23:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:29.272 23:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:37:29.272 23:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:29.272 23:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:37:29.530 BaseBdev1_malloc 00:37:29.530 23:23:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:29.788 [2024-07-13 23:23:19.097873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:29.788 [2024-07-13 23:23:19.098201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:29.788 [2024-07-13 23:23:19.098399] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:37:29.788 [2024-07-13 23:23:19.098577] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:29.788 [2024-07-13 23:23:19.101142] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:29.788 [2024-07-13 23:23:19.101331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:29.788 BaseBdev1 00:37:29.788 23:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:29.788 23:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:37:30.046 BaseBdev2_malloc 00:37:30.046 23:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:30.305 [2024-07-13 23:23:19.568648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:30.305 [2024-07-13 23:23:19.568963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:30.305 [2024-07-13 23:23:19.569057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:37:30.305 [2024-07-13 23:23:19.569412] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:30.305 [2024-07-13 23:23:19.571670] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:30.305 [2024-07-13 23:23:19.571839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:30.305 BaseBdev2 00:37:30.305 23:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:37:30.563 spare_malloc 
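Each base bdev in this test is a two-layer stack: a malloc bdev carrying interleaved metadata, wrapped by a passthru bdev that the RAID module later claims. One layer, as created above (BaseBdev2 repeats the pattern; the spare additionally inserts a delay bdev between its malloc and its passthru):

    scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

The -m 32 -i arguments attach 32 bytes of interleaved metadata to every 4096-byte block, which is why the bdev dumps in this trace report "block_size": 4128 (4096 + 32), "md_size": 32 and "md_interleave": true, and why raid creation logs "blocklen 4128".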
00:37:30.563 23:23:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:30.822 spare_delay 00:37:30.822 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:31.080 [2024-07-13 23:23:20.255806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:31.080 [2024-07-13 23:23:20.256103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.080 [2024-07-13 23:23:20.256194] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:31.080 [2024-07-13 23:23:20.256516] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.080 [2024-07-13 23:23:20.259041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.080 [2024-07-13 23:23:20.259240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:31.080 spare 00:37:31.080 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:31.338 [2024-07-13 23:23:20.532037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:31.338 [2024-07-13 23:23:20.534366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:31.338 [2024-07-13 23:23:20.534731] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:37:31.338 [2024-07-13 23:23:20.534855] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:31.338 [2024-07-13 23:23:20.535035] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:37:31.338 [2024-07-13 23:23:20.535273] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:37:31.338 [2024-07-13 23:23:20.535403] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:37:31.338 [2024-07-13 23:23:20.535578] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:31.338 23:23:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.338 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.597 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:31.597 "name": "raid_bdev1", 00:37:31.597 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:31.597 "strip_size_kb": 0, 00:37:31.597 "state": "online", 00:37:31.597 "raid_level": "raid1", 00:37:31.597 "superblock": true, 00:37:31.597 "num_base_bdevs": 2, 00:37:31.597 "num_base_bdevs_discovered": 2, 00:37:31.597 "num_base_bdevs_operational": 2, 00:37:31.597 "base_bdevs_list": [ 00:37:31.597 { 00:37:31.597 "name": "BaseBdev1", 00:37:31.597 "uuid": "bcd23705-aa30-59e1-b64f-67ef08fb8461", 00:37:31.597 "is_configured": true, 00:37:31.597 "data_offset": 256, 00:37:31.597 "data_size": 7936 00:37:31.597 }, 00:37:31.597 { 00:37:31.597 "name": "BaseBdev2", 00:37:31.597 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:31.597 "is_configured": true, 00:37:31.597 "data_offset": 256, 00:37:31.597 "data_size": 7936 00:37:31.597 } 00:37:31.597 ] 00:37:31.597 }' 00:37:31.597 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:31.597 23:23:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:32.181 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:32.181 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:37:32.447 [2024-07-13 23:23:21.612508] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:32.447 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:37:32.447 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.447 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:32.704 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:37:32.704 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:37:32.704 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:37:32.704 23:23:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:32.961 [2024-07-13 23:23:22.112325] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:32.961 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:32.962 "name": "raid_bdev1", 00:37:32.962 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:32.962 "strip_size_kb": 0, 00:37:32.962 "state": "online", 00:37:32.962 "raid_level": "raid1", 00:37:32.962 "superblock": true, 00:37:32.962 "num_base_bdevs": 2, 00:37:32.962 "num_base_bdevs_discovered": 1, 00:37:32.962 "num_base_bdevs_operational": 1, 00:37:32.962 "base_bdevs_list": [ 00:37:32.962 { 00:37:32.962 "name": null, 00:37:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.962 "is_configured": false, 00:37:32.962 "data_offset": 256, 00:37:32.962 "data_size": 7936 00:37:32.962 }, 00:37:32.962 { 00:37:32.962 "name": "BaseBdev2", 00:37:32.962 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:32.962 "is_configured": true, 00:37:32.962 "data_offset": 256, 00:37:32.962 "data_size": 7936 00:37:32.962 } 00:37:32.962 ] 00:37:32.962 }' 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:32.962 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:33.895 23:23:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:33.895 [2024-07-13 23:23:23.208555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:33.895 [2024-07-13 23:23:23.212538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:37:33.895 [2024-07-13 23:23:23.214894] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:33.895 23:23:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
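At this point the test has removed BaseBdev1 from the online array, confirmed the degraded state ('online raid1 0 1', with a null slot in base_bdevs_list), re-added the spare via bdev_raid_add_base_bdev, and slept one second so the rebuild can make progress. verify_raid_bdev_process, whose trace continues below, then probes the same RPC output and checks the process fields; schematically:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'
    # on the captured JSON, the helper expects:
    #   .process.type   == "rebuild"
    #   .process.target == "spare"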
00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:35.267 "name": "raid_bdev1", 00:37:35.267 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:35.267 "strip_size_kb": 0, 00:37:35.267 "state": "online", 00:37:35.267 "raid_level": "raid1", 00:37:35.267 "superblock": true, 00:37:35.267 "num_base_bdevs": 2, 00:37:35.267 "num_base_bdevs_discovered": 2, 00:37:35.267 "num_base_bdevs_operational": 2, 00:37:35.267 "process": { 00:37:35.267 "type": "rebuild", 00:37:35.267 "target": "spare", 00:37:35.267 "progress": { 00:37:35.267 "blocks": 3072, 00:37:35.267 "percent": 38 00:37:35.267 } 00:37:35.267 }, 00:37:35.267 "base_bdevs_list": [ 00:37:35.267 { 00:37:35.267 "name": "spare", 00:37:35.267 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:35.267 "is_configured": true, 00:37:35.267 "data_offset": 256, 00:37:35.267 "data_size": 7936 00:37:35.267 }, 00:37:35.267 { 00:37:35.267 "name": "BaseBdev2", 00:37:35.267 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:35.267 "is_configured": true, 00:37:35.267 "data_offset": 256, 00:37:35.267 "data_size": 7936 00:37:35.267 } 00:37:35.267 ] 00:37:35.267 }' 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:35.267 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:35.525 [2024-07-13 23:23:24.858399] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:35.525 [2024-07-13 23:23:24.926131] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:35.525 [2024-07-13 23:23:24.926406] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:35.525 [2024-07-13 23:23:24.926536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:35.525 [2024-07-13 23:23:24.926582] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:35.784 23:23:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.784 23:23:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.042 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:36.042 "name": "raid_bdev1", 00:37:36.042 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:36.042 "strip_size_kb": 0, 00:37:36.042 "state": "online", 00:37:36.042 "raid_level": "raid1", 00:37:36.042 "superblock": true, 00:37:36.042 "num_base_bdevs": 2, 00:37:36.042 "num_base_bdevs_discovered": 1, 00:37:36.042 "num_base_bdevs_operational": 1, 00:37:36.042 "base_bdevs_list": [ 00:37:36.042 { 00:37:36.042 "name": null, 00:37:36.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.042 "is_configured": false, 00:37:36.042 "data_offset": 256, 00:37:36.042 "data_size": 7936 00:37:36.042 }, 00:37:36.042 { 00:37:36.042 "name": "BaseBdev2", 00:37:36.042 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:36.042 "is_configured": true, 00:37:36.042 "data_offset": 256, 00:37:36.042 "data_size": 7936 00:37:36.042 } 00:37:36.042 ] 00:37:36.042 }' 00:37:36.042 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:36.042 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:36.609 23:23:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.867 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:37:36.867 "name": "raid_bdev1", 00:37:36.867 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:36.867 "strip_size_kb": 0, 00:37:36.867 "state": "online", 00:37:36.867 "raid_level": "raid1", 00:37:36.867 "superblock": true, 00:37:36.867 "num_base_bdevs": 2, 00:37:36.867 "num_base_bdevs_discovered": 1, 00:37:36.867 "num_base_bdevs_operational": 1, 00:37:36.867 "base_bdevs_list": [ 00:37:36.867 { 00:37:36.867 "name": null, 00:37:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.867 "is_configured": false, 00:37:36.867 "data_offset": 256, 00:37:36.867 "data_size": 7936 00:37:36.867 }, 00:37:36.867 { 00:37:36.867 "name": "BaseBdev2", 00:37:36.867 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:36.867 "is_configured": true, 00:37:36.867 "data_offset": 256, 00:37:36.867 "data_size": 7936 00:37:36.867 } 00:37:36.867 ] 00:37:36.867 }' 00:37:36.867 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:36.867 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:36.867 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:36.867 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:36.867 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:37.126 [2024-07-13 23:23:26.448097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:37.126 [2024-07-13 23:23:26.451886] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:37:37.126 [2024-07-13 23:23:26.454082] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:37.126 23:23:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:38.500 "name": "raid_bdev1", 00:37:38.500 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:38.500 "strip_size_kb": 0, 00:37:38.500 "state": "online", 00:37:38.500 "raid_level": "raid1", 00:37:38.500 "superblock": true, 00:37:38.500 "num_base_bdevs": 2, 00:37:38.500 "num_base_bdevs_discovered": 2, 00:37:38.500 "num_base_bdevs_operational": 2, 00:37:38.500 
"process": { 00:37:38.500 "type": "rebuild", 00:37:38.500 "target": "spare", 00:37:38.500 "progress": { 00:37:38.500 "blocks": 3072, 00:37:38.500 "percent": 38 00:37:38.500 } 00:37:38.500 }, 00:37:38.500 "base_bdevs_list": [ 00:37:38.500 { 00:37:38.500 "name": "spare", 00:37:38.500 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:38.500 "is_configured": true, 00:37:38.500 "data_offset": 256, 00:37:38.500 "data_size": 7936 00:37:38.500 }, 00:37:38.500 { 00:37:38.500 "name": "BaseBdev2", 00:37:38.500 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:38.500 "is_configured": true, 00:37:38.500 "data_offset": 256, 00:37:38.500 "data_size": 7936 00:37:38.500 } 00:37:38.500 ] 00:37:38.500 }' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:37:38.500 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1445 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:38.500 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:38.501 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:38.501 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:38.501 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:38.501 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:38.501 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.501 23:23:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.759 23:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:38.759 "name": "raid_bdev1", 00:37:38.759 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:38.759 "strip_size_kb": 0, 00:37:38.759 "state": "online", 00:37:38.759 "raid_level": "raid1", 00:37:38.759 "superblock": true, 00:37:38.759 "num_base_bdevs": 2, 00:37:38.759 
"num_base_bdevs_discovered": 2, 00:37:38.759 "num_base_bdevs_operational": 2, 00:37:38.759 "process": { 00:37:38.759 "type": "rebuild", 00:37:38.759 "target": "spare", 00:37:38.759 "progress": { 00:37:38.759 "blocks": 4096, 00:37:38.759 "percent": 51 00:37:38.759 } 00:37:38.759 }, 00:37:38.759 "base_bdevs_list": [ 00:37:38.759 { 00:37:38.759 "name": "spare", 00:37:38.759 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:38.759 "is_configured": true, 00:37:38.759 "data_offset": 256, 00:37:38.759 "data_size": 7936 00:37:38.759 }, 00:37:38.759 { 00:37:38.759 "name": "BaseBdev2", 00:37:38.759 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:38.759 "is_configured": true, 00:37:38.759 "data_offset": 256, 00:37:38.759 "data_size": 7936 00:37:38.759 } 00:37:38.759 ] 00:37:38.759 }' 00:37:38.759 23:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:38.759 23:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:38.759 23:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:39.017 23:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:39.017 23:23:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.952 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.212 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:40.212 "name": "raid_bdev1", 00:37:40.212 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:40.212 "strip_size_kb": 0, 00:37:40.212 "state": "online", 00:37:40.212 "raid_level": "raid1", 00:37:40.212 "superblock": true, 00:37:40.212 "num_base_bdevs": 2, 00:37:40.212 "num_base_bdevs_discovered": 2, 00:37:40.212 "num_base_bdevs_operational": 2, 00:37:40.212 "process": { 00:37:40.212 "type": "rebuild", 00:37:40.212 "target": "spare", 00:37:40.212 "progress": { 00:37:40.212 "blocks": 7424, 00:37:40.212 "percent": 93 00:37:40.212 } 00:37:40.212 }, 00:37:40.212 "base_bdevs_list": [ 00:37:40.212 { 00:37:40.212 "name": "spare", 00:37:40.212 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:40.212 "is_configured": true, 00:37:40.212 "data_offset": 256, 00:37:40.212 "data_size": 7936 00:37:40.212 }, 00:37:40.212 { 00:37:40.212 "name": "BaseBdev2", 00:37:40.212 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 
00:37:40.212 "is_configured": true, 00:37:40.212 "data_offset": 256, 00:37:40.212 "data_size": 7936 00:37:40.212 } 00:37:40.212 ] 00:37:40.212 }' 00:37:40.212 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:40.212 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:40.212 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:40.212 [2024-07-13 23:23:29.572376] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:40.212 [2024-07-13 23:23:29.572621] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:40.212 [2024-07-13 23:23:29.572910] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:40.212 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:40.212 23:23:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:41.588 "name": "raid_bdev1", 00:37:41.588 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:41.588 "strip_size_kb": 0, 00:37:41.588 "state": "online", 00:37:41.588 "raid_level": "raid1", 00:37:41.588 "superblock": true, 00:37:41.588 "num_base_bdevs": 2, 00:37:41.588 "num_base_bdevs_discovered": 2, 00:37:41.588 "num_base_bdevs_operational": 2, 00:37:41.588 "base_bdevs_list": [ 00:37:41.588 { 00:37:41.588 "name": "spare", 00:37:41.588 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:41.588 "is_configured": true, 00:37:41.588 "data_offset": 256, 00:37:41.588 "data_size": 7936 00:37:41.588 }, 00:37:41.588 { 00:37:41.588 "name": "BaseBdev2", 00:37:41.588 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:41.588 "is_configured": true, 00:37:41.588 "data_offset": 256, 00:37:41.588 "data_size": 7936 00:37:41.588 } 00:37:41.588 ] 00:37:41.588 }' 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.588 23:23:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.847 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:41.847 "name": "raid_bdev1", 00:37:41.847 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:41.847 "strip_size_kb": 0, 00:37:41.847 "state": "online", 00:37:41.847 "raid_level": "raid1", 00:37:41.847 "superblock": true, 00:37:41.847 "num_base_bdevs": 2, 00:37:41.847 "num_base_bdevs_discovered": 2, 00:37:41.847 "num_base_bdevs_operational": 2, 00:37:41.847 "base_bdevs_list": [ 00:37:41.847 { 00:37:41.847 "name": "spare", 00:37:41.847 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:41.847 "is_configured": true, 00:37:41.847 "data_offset": 256, 00:37:41.847 "data_size": 7936 00:37:41.847 }, 00:37:41.847 { 00:37:41.847 "name": "BaseBdev2", 00:37:41.847 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:41.847 "is_configured": true, 00:37:41.847 "data_offset": 256, 00:37:41.847 "data_size": 7936 00:37:41.847 } 00:37:41.847 ] 00:37:41.847 }' 00:37:41.847 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:42.106 23:23:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:42.106 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.107 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.365 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:42.365 "name": "raid_bdev1", 00:37:42.365 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:42.365 "strip_size_kb": 0, 00:37:42.365 "state": "online", 00:37:42.365 "raid_level": "raid1", 00:37:42.365 "superblock": true, 00:37:42.365 "num_base_bdevs": 2, 00:37:42.365 "num_base_bdevs_discovered": 2, 00:37:42.365 "num_base_bdevs_operational": 2, 00:37:42.365 "base_bdevs_list": [ 00:37:42.365 { 00:37:42.365 "name": "spare", 00:37:42.365 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:42.365 "is_configured": true, 00:37:42.365 "data_offset": 256, 00:37:42.365 "data_size": 7936 00:37:42.365 }, 00:37:42.365 { 00:37:42.365 "name": "BaseBdev2", 00:37:42.365 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:42.365 "is_configured": true, 00:37:42.365 "data_offset": 256, 00:37:42.365 "data_size": 7936 00:37:42.365 } 00:37:42.365 ] 00:37:42.365 }' 00:37:42.365 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:42.365 23:23:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:42.933 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:43.221 [2024-07-13 23:23:32.418319] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:43.221 [2024-07-13 23:23:32.418520] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:43.221 [2024-07-13 23:23:32.418734] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:43.221 [2024-07-13 23:23:32.418925] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:43.221 [2024-07-13 23:23:32.419033] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:37:43.221 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:43.221 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:37:43.480 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:37:43.480 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:37:43.480 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:37:43.480 23:23:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:43.739 23:23:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:44.006 [2024-07-13 23:23:33.182453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:44.006 [2024-07-13 23:23:33.182762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:44.006 [2024-07-13 23:23:33.182924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:44.006 [2024-07-13 23:23:33.183051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:44.006 [2024-07-13 23:23:33.185509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:44.006 [2024-07-13 23:23:33.185702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:44.006 [2024-07-13 23:23:33.185904] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:44.006 [2024-07-13 23:23:33.186078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:44.006 [2024-07-13 23:23:33.186351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:44.006 spare 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.006 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.006 [2024-07-13 23:23:33.286670] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:37:44.006 [2024-07-13 23:23:33.286879] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:44.006 [2024-07-13 23:23:33.287093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:37:44.006 [2024-07-13 23:23:33.287368] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:37:44.006 [2024-07-13 23:23:33.287471] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:37:44.006 [2024-07-13 23:23:33.287657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:44.265 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:44.265 "name": "raid_bdev1", 00:37:44.265 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:44.265 "strip_size_kb": 0, 00:37:44.265 "state": "online", 00:37:44.265 "raid_level": "raid1", 00:37:44.265 "superblock": true, 00:37:44.265 "num_base_bdevs": 2, 00:37:44.265 "num_base_bdevs_discovered": 2, 00:37:44.265 "num_base_bdevs_operational": 2, 00:37:44.265 "base_bdevs_list": [ 00:37:44.265 { 00:37:44.265 "name": "spare", 00:37:44.265 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:44.265 "is_configured": true, 00:37:44.265 "data_offset": 256, 00:37:44.265 "data_size": 7936 00:37:44.265 }, 00:37:44.265 { 00:37:44.265 "name": "BaseBdev2", 00:37:44.265 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:44.265 "is_configured": true, 00:37:44.265 "data_offset": 256, 00:37:44.265 "data_size": 7936 00:37:44.265 } 00:37:44.265 ] 00:37:44.265 }' 00:37:44.265 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:44.265 23:23:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.831 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.089 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:45.089 "name": "raid_bdev1", 00:37:45.089 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:45.089 "strip_size_kb": 0, 00:37:45.089 "state": "online", 00:37:45.089 "raid_level": "raid1", 00:37:45.089 "superblock": true, 00:37:45.089 "num_base_bdevs": 2, 00:37:45.089 "num_base_bdevs_discovered": 2, 00:37:45.089 "num_base_bdevs_operational": 2, 00:37:45.089 "base_bdevs_list": [ 00:37:45.089 { 00:37:45.089 "name": "spare", 00:37:45.089 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:45.089 "is_configured": true, 00:37:45.089 "data_offset": 256, 00:37:45.089 "data_size": 7936 00:37:45.089 }, 00:37:45.089 { 00:37:45.089 "name": "BaseBdev2", 00:37:45.089 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:45.089 "is_configured": true, 00:37:45.089 "data_offset": 256, 00:37:45.089 "data_size": 7936 00:37:45.089 } 00:37:45.089 ] 00:37:45.089 }' 00:37:45.089 23:23:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:45.089 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:45.089 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:45.089 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:45.089 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:45.089 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:45.348 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:37:45.348 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:45.606 [2024-07-13 23:23:34.854948] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:45.606 23:23:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.864 23:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:45.864 "name": "raid_bdev1", 00:37:45.864 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:45.864 "strip_size_kb": 0, 00:37:45.864 "state": "online", 00:37:45.864 "raid_level": "raid1", 00:37:45.864 "superblock": true, 00:37:45.864 "num_base_bdevs": 2, 00:37:45.864 "num_base_bdevs_discovered": 1, 00:37:45.864 "num_base_bdevs_operational": 1, 00:37:45.864 "base_bdevs_list": [ 00:37:45.864 { 00:37:45.864 "name": null, 00:37:45.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.864 "is_configured": false, 00:37:45.864 "data_offset": 256, 00:37:45.864 "data_size": 7936 00:37:45.864 }, 
00:37:45.864 { 00:37:45.864 "name": "BaseBdev2", 00:37:45.864 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:45.864 "is_configured": true, 00:37:45.864 "data_offset": 256, 00:37:45.864 "data_size": 7936 00:37:45.864 } 00:37:45.864 ] 00:37:45.864 }' 00:37:45.864 23:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:45.864 23:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:46.431 23:23:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:46.690 [2024-07-13 23:23:36.015218] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:46.690 [2024-07-13 23:23:36.015655] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:46.690 [2024-07-13 23:23:36.015791] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:46.690 [2024-07-13 23:23:36.015915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:46.690 [2024-07-13 23:23:36.019577] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:37:46.690 [2024-07-13 23:23:36.021948] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:46.690 23:23:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:48.068 "name": "raid_bdev1", 00:37:48.068 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:48.068 "strip_size_kb": 0, 00:37:48.068 "state": "online", 00:37:48.068 "raid_level": "raid1", 00:37:48.068 "superblock": true, 00:37:48.068 "num_base_bdevs": 2, 00:37:48.068 "num_base_bdevs_discovered": 2, 00:37:48.068 "num_base_bdevs_operational": 2, 00:37:48.068 "process": { 00:37:48.068 "type": "rebuild", 00:37:48.068 "target": "spare", 00:37:48.068 "progress": { 00:37:48.068 "blocks": 3072, 00:37:48.068 "percent": 38 00:37:48.068 } 00:37:48.068 }, 00:37:48.068 "base_bdevs_list": [ 00:37:48.068 { 00:37:48.068 "name": "spare", 00:37:48.068 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:48.068 "is_configured": true, 00:37:48.068 "data_offset": 256, 00:37:48.068 "data_size": 7936 00:37:48.068 }, 00:37:48.068 { 
00:37:48.068 "name": "BaseBdev2", 00:37:48.068 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:48.068 "is_configured": true, 00:37:48.068 "data_offset": 256, 00:37:48.068 "data_size": 7936 00:37:48.068 } 00:37:48.068 ] 00:37:48.068 }' 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:48.068 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:48.326 [2024-07-13 23:23:37.632909] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:48.326 [2024-07-13 23:23:37.732206] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:48.326 [2024-07-13 23:23:37.732435] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:48.326 [2024-07-13 23:23:37.732495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:48.585 [2024-07-13 23:23:37.732642] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.585 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:48.585 "name": "raid_bdev1", 00:37:48.585 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:48.585 "strip_size_kb": 0, 00:37:48.585 "state": "online", 00:37:48.585 "raid_level": "raid1", 00:37:48.585 "superblock": true, 00:37:48.585 "num_base_bdevs": 2, 
00:37:48.585 "num_base_bdevs_discovered": 1, 00:37:48.585 "num_base_bdevs_operational": 1, 00:37:48.585 "base_bdevs_list": [ 00:37:48.585 { 00:37:48.585 "name": null, 00:37:48.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:48.585 "is_configured": false, 00:37:48.585 "data_offset": 256, 00:37:48.585 "data_size": 7936 00:37:48.585 }, 00:37:48.585 { 00:37:48.585 "name": "BaseBdev2", 00:37:48.585 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:48.585 "is_configured": true, 00:37:48.585 "data_offset": 256, 00:37:48.585 "data_size": 7936 00:37:48.586 } 00:37:48.586 ] 00:37:48.586 }' 00:37:48.586 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:48.586 23:23:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:49.519 23:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:49.519 [2024-07-13 23:23:38.913622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:49.519 [2024-07-13 23:23:38.913869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.519 [2024-07-13 23:23:38.913971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:49.519 [2024-07-13 23:23:38.914197] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.519 [2024-07-13 23:23:38.914461] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.519 [2024-07-13 23:23:38.914625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:49.519 [2024-07-13 23:23:38.914831] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:49.519 [2024-07-13 23:23:38.914953] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:49.519 [2024-07-13 23:23:38.915057] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
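The delete/re-create cycle traced above is the crux of this phase: tearing down the passthru "spare" and recreating it over spare_delay lets the examine path find the stale on-disk superblock and re-add the bdev automatically. A sketch of that cycle under the same assumptions (socket path and bdev names as in this run):

    # Drop spare out of raid_bdev1, then bring it back via examine.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_passthru_delete spare                      # spare is removed from the array
    rpc bdev_passthru_create -b spare_delay -p spare    # examine re-reads the raid superblock
    # Its seq_number (4) is older than raid_bdev1's (5), so spare is re-added
    # and a rebuild with target "spare" starts, as the NOTICE lines above show.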
00:37:49.519 [2024-07-13 23:23:38.915152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:49.519 [2024-07-13 23:23:38.918938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:37:49.519 spare 00:37:49.519 [2024-07-13 23:23:38.921413] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:49.777 23:23:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:50.712 23:23:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:50.970 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:50.970 "name": "raid_bdev1", 00:37:50.970 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:50.970 "strip_size_kb": 0, 00:37:50.970 "state": "online", 00:37:50.970 "raid_level": "raid1", 00:37:50.970 "superblock": true, 00:37:50.970 "num_base_bdevs": 2, 00:37:50.971 "num_base_bdevs_discovered": 2, 00:37:50.971 "num_base_bdevs_operational": 2, 00:37:50.971 "process": { 00:37:50.971 "type": "rebuild", 00:37:50.971 "target": "spare", 00:37:50.971 "progress": { 00:37:50.971 "blocks": 3072, 00:37:50.971 "percent": 38 00:37:50.971 } 00:37:50.971 }, 00:37:50.971 "base_bdevs_list": [ 00:37:50.971 { 00:37:50.971 "name": "spare", 00:37:50.971 "uuid": "5342e2ff-e99e-506c-8101-d88bab9a74bb", 00:37:50.971 "is_configured": true, 00:37:50.971 "data_offset": 256, 00:37:50.971 "data_size": 7936 00:37:50.971 }, 00:37:50.971 { 00:37:50.971 "name": "BaseBdev2", 00:37:50.971 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:50.971 "is_configured": true, 00:37:50.971 "data_offset": 256, 00:37:50.971 "data_size": 7936 00:37:50.971 } 00:37:50.971 ] 00:37:50.971 }' 00:37:50.971 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:50.971 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:50.971 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:50.971 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:50.971 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:51.230 [2024-07-13 23:23:40.555376] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:51.230 [2024-07-13 23:23:40.631020] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:51.230 [2024-07-13 23:23:40.631233] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:51.230 [2024-07-13 23:23:40.631362] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:51.230 [2024-07-13 23:23:40.631408] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:51.488 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.746 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:51.746 "name": "raid_bdev1", 00:37:51.746 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:51.746 "strip_size_kb": 0, 00:37:51.746 "state": "online", 00:37:51.746 "raid_level": "raid1", 00:37:51.746 "superblock": true, 00:37:51.746 "num_base_bdevs": 2, 00:37:51.746 "num_base_bdevs_discovered": 1, 00:37:51.746 "num_base_bdevs_operational": 1, 00:37:51.746 "base_bdevs_list": [ 00:37:51.746 { 00:37:51.746 "name": null, 00:37:51.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:51.746 "is_configured": false, 00:37:51.746 "data_offset": 256, 00:37:51.746 "data_size": 7936 00:37:51.746 }, 00:37:51.746 { 00:37:51.746 "name": "BaseBdev2", 00:37:51.746 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:51.746 "is_configured": true, 00:37:51.746 "data_offset": 256, 00:37:51.746 "data_size": 7936 00:37:51.746 } 00:37:51.746 ] 00:37:51.746 }' 00:37:51.746 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:51.747 23:23:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
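The state check traced just above (online, raid1, strip size 0, one of two base bdevs discovered) boils down to field comparisons on the same JSON record. A companion sketch, again assuming the rpc.py path and socket from this trace:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state'      <<<"$info") == online ]]
    [[ $(jq -r '.raid_level' <<<"$info") == raid1  ]]
    (( $(jq -r '.strip_size_kb' <<<"$info") == 0 ))            # raid1 has no stripes
    (( $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 1 ))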
00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:52.313 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.571 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:52.571 "name": "raid_bdev1", 00:37:52.571 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:52.571 "strip_size_kb": 0, 00:37:52.571 "state": "online", 00:37:52.571 "raid_level": "raid1", 00:37:52.571 "superblock": true, 00:37:52.571 "num_base_bdevs": 2, 00:37:52.571 "num_base_bdevs_discovered": 1, 00:37:52.571 "num_base_bdevs_operational": 1, 00:37:52.571 "base_bdevs_list": [ 00:37:52.571 { 00:37:52.571 "name": null, 00:37:52.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:52.572 "is_configured": false, 00:37:52.572 "data_offset": 256, 00:37:52.572 "data_size": 7936 00:37:52.572 }, 00:37:52.572 { 00:37:52.572 "name": "BaseBdev2", 00:37:52.572 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:52.572 "is_configured": true, 00:37:52.572 "data_offset": 256, 00:37:52.572 "data_size": 7936 00:37:52.572 } 00:37:52.572 ] 00:37:52.572 }' 00:37:52.572 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:52.572 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:52.572 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:52.572 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:52.572 23:23:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:53.137 23:23:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:53.137 [2024-07-13 23:23:42.436541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:53.138 [2024-07-13 23:23:42.436827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:53.138 [2024-07-13 23:23:42.436940] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:53.138 [2024-07-13 23:23:42.437117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:53.138 [2024-07-13 23:23:42.437353] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:53.138 [2024-07-13 23:23:42.437492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:53.138 [2024-07-13 23:23:42.437639] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:53.138 [2024-07-13 23:23:42.437757] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:53.138 [2024-07-13 23:23:42.437872] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:53.138 BaseBdev1 00:37:53.138 23:23:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:54.081 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.337 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:54.337 "name": "raid_bdev1", 00:37:54.337 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:54.337 "strip_size_kb": 0, 00:37:54.337 "state": "online", 00:37:54.337 "raid_level": "raid1", 00:37:54.337 "superblock": true, 00:37:54.337 "num_base_bdevs": 2, 00:37:54.337 "num_base_bdevs_discovered": 1, 00:37:54.337 "num_base_bdevs_operational": 1, 00:37:54.337 "base_bdevs_list": [ 00:37:54.337 { 00:37:54.337 "name": null, 00:37:54.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:54.337 "is_configured": false, 00:37:54.337 "data_offset": 256, 00:37:54.337 "data_size": 7936 00:37:54.337 }, 00:37:54.337 { 00:37:54.337 "name": "BaseBdev2", 00:37:54.337 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:54.337 "is_configured": true, 00:37:54.337 "data_offset": 256, 00:37:54.337 "data_size": 7936 00:37:54.337 } 00:37:54.337 ] 00:37:54.337 }' 00:37:54.337 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:54.337 23:23:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:55.270 "name": "raid_bdev1", 00:37:55.270 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:55.270 "strip_size_kb": 0, 00:37:55.270 "state": "online", 00:37:55.270 "raid_level": "raid1", 00:37:55.270 "superblock": true, 00:37:55.270 "num_base_bdevs": 2, 00:37:55.270 "num_base_bdevs_discovered": 1, 00:37:55.270 "num_base_bdevs_operational": 1, 00:37:55.270 "base_bdevs_list": [ 00:37:55.270 { 00:37:55.270 "name": null, 00:37:55.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:55.270 "is_configured": false, 00:37:55.270 "data_offset": 256, 00:37:55.270 "data_size": 7936 00:37:55.270 }, 00:37:55.270 { 00:37:55.270 "name": "BaseBdev2", 00:37:55.270 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:55.270 "is_configured": true, 00:37:55.270 "data_offset": 256, 00:37:55.270 "data_size": 7936 00:37:55.270 } 00:37:55.270 ] 00:37:55.270 }' 00:37:55.270 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:55.529 23:23:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:55.529 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:55.787 [2024-07-13 23:23:44.970167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:55.787 [2024-07-13 23:23:44.970579] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:55.787 [2024-07-13 23:23:44.970705] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:55.787 request: 00:37:55.787 { 00:37:55.787 "base_bdev": "BaseBdev1", 00:37:55.787 "raid_bdev": "raid_bdev1", 00:37:55.787 "method": "bdev_raid_add_base_bdev", 00:37:55.787 "req_id": 1 00:37:55.787 } 00:37:55.787 Got JSON-RPC error response 00:37:55.787 response: 00:37:55.787 { 00:37:55.787 "code": -22, 00:37:55.787 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:55.787 } 00:37:55.787 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:37:55.787 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:55.787 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:55.787 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:55.787 23:23:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:56.722 23:23:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.980 
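The trace above is SPDK's negative-test idiom from autotest_common.sh: NOT (together with the valid_exec_arg and es bookkeeping visible in the trace) runs the RPC and inverts its exit status, so this step passes only because bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 is rejected with -22 (Invalid argument), the rebuilt array's superblock no longer listing BaseBdev1's uuid. A minimal stand-alone sketch of the same pattern follows; it is simplified, since the real helper also checks that its argument is an executable command:

# Succeeds only when the wrapped command fails: the negative-test helper.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the test expects
}

# Expect rejection: the raid superblock no longer references BaseBdev1.
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_add_base_bdev raid_bdev1 BaseBdev1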
23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:56.980 "name": "raid_bdev1", 00:37:56.980 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:56.980 "strip_size_kb": 0, 00:37:56.980 "state": "online", 00:37:56.980 "raid_level": "raid1", 00:37:56.980 "superblock": true, 00:37:56.980 "num_base_bdevs": 2, 00:37:56.980 "num_base_bdevs_discovered": 1, 00:37:56.980 "num_base_bdevs_operational": 1, 00:37:56.980 "base_bdevs_list": [ 00:37:56.980 { 00:37:56.980 "name": null, 00:37:56.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:56.980 "is_configured": false, 00:37:56.980 "data_offset": 256, 00:37:56.980 "data_size": 7936 00:37:56.980 }, 00:37:56.980 { 00:37:56.980 "name": "BaseBdev2", 00:37:56.980 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:56.980 "is_configured": true, 00:37:56.980 "data_offset": 256, 00:37:56.980 "data_size": 7936 00:37:56.980 } 00:37:56.980 ] 00:37:56.980 }' 00:37:56.980 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:56.980 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.545 23:23:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:57.803 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:57.803 "name": "raid_bdev1", 00:37:57.803 "uuid": "293cd599-313b-4145-a6eb-c09375759667", 00:37:57.803 "strip_size_kb": 0, 00:37:57.803 "state": "online", 00:37:57.803 "raid_level": "raid1", 00:37:57.803 "superblock": true, 00:37:57.803 "num_base_bdevs": 2, 00:37:57.803 "num_base_bdevs_discovered": 1, 00:37:57.803 "num_base_bdevs_operational": 1, 00:37:57.803 "base_bdevs_list": [ 00:37:57.803 { 00:37:57.803 "name": null, 00:37:57.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:57.803 "is_configured": false, 00:37:57.803 "data_offset": 256, 00:37:57.803 "data_size": 7936 00:37:57.803 }, 00:37:57.803 { 00:37:57.803 "name": "BaseBdev2", 00:37:57.803 "uuid": "b8e7ced7-b4c7-5754-be1a-1665db7c6e2e", 00:37:57.803 "is_configured": true, 00:37:57.803 "data_offset": 256, 00:37:57.803 "data_size": 7936 00:37:57.803 } 00:37:57.803 ] 00:37:57.803 }' 00:37:57.803 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:58.062 23:23:47 
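verify_raid_bdev_process, traced above, extracts one bdev's record from bdev_raid_get_bdevs all and asserts that no background process (such as a rebuild) is reported for it. A condensed sketch of the same jq queries, assuming the same RPC socket the test uses:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Pull the record for raid_bdev1 out of the full bdev list.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "raid_bdev1")')

# '// "none"' substitutes a default when the .process object is absent,
# so an idle array compares equal to the expected value "none".
[[ $(jq -r '.process.type // "none"' <<< "$info") == none ]]
[[ $(jq -r '.process.target // "none"' <<< "$info") == none ]]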
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 172803 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 172803 ']' 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 172803 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172803 00:37:58.062 killing process with pid 172803 00:37:58.062 Received shutdown signal, test time was about 60.000000 seconds 00:37:58.062 00:37:58.062 Latency(us) 00:37:58.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.062 =================================================================================================================== 00:37:58.062 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172803' 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 172803 00:37:58.062 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 172803 00:37:58.062 [2024-07-13 23:23:47.292097] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:58.062 [2024-07-13 23:23:47.292289] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:58.062 [2024-07-13 23:23:47.292415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:58.062 [2024-07-13 23:23:47.292676] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:37:58.062 [2024-07-13 23:23:47.322633] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:58.321 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:37:58.321 00:37:58.321 real 0m29.958s 00:37:58.321 user 0m49.473s 00:37:58.321 sys 0m2.735s 00:37:58.321 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:58.321 ************************************ 00:37:58.321 END TEST raid_rebuild_test_sb_md_interleaved 00:37:58.321 ************************************ 00:37:58.321 23:23:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:58.321 23:23:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:58.321 23:23:47 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:37:58.321 23:23:47 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:37:58.321 23:23:47 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 172803 ']' 00:37:58.321 23:23:47 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 172803 00:37:58.321 23:23:47 
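killprocess, traced above, is the harness teardown: it checks that pid 172803 is still alive and still named like an SPDK reactor before signalling it, then waits so the exit status is reaped. (The all-zero Latency block with an average of 18446744073709551616.00, which is 2^64, looks like a sentinel printed at shutdown when no I/O was measured.) A simplified sketch of the helper's shape; the real version in autotest_common.sh also special-cases processes launched via sudo:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                 # process must still exist
    # Refuse to signal an unrelated process that reused the pid.
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == reactor_* ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap and propagate exit status
}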
bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:37:58.321 ************************************ 00:37:58.321 END TEST bdev_raid 00:37:58.321 ************************************ 00:37:58.321 00:37:58.321 real 23m54.823s 00:37:58.321 user 41m58.027s 00:37:58.321 sys 3m1.085s 00:37:58.321 23:23:47 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:58.321 23:23:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:58.321 23:23:47 -- common/autotest_common.sh@1142 -- # return 0 00:37:58.321 23:23:47 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:58.321 23:23:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:58.321 23:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:58.321 23:23:47 -- common/autotest_common.sh@10 -- # set +x 00:37:58.321 ************************************ 00:37:58.321 START TEST bdevperf_config 00:37:58.321 ************************************ 00:37:58.321 23:23:47 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:58.580 * Looking for test storage... 00:37:58.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:37:58.580 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:58.580 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # 
local job_section=job1 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:58.580 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:58.580 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:58.580 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:58.580 23:23:47 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:01.863 23:23:50 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-13 23:23:47.841647] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:01.863 [2024-07-13 23:23:47.842542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173645 ] 00:38:01.863 Using job config with 4 jobs 00:38:01.863 [2024-07-13 23:23:47.988515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.863 [2024-07-13 23:23:48.072404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.863 cpumask for '\''job0'\'' is too big 00:38:01.863 cpumask for '\''job1'\'' is too big 00:38:01.863 cpumask for '\''job2'\'' is too big 00:38:01.863 cpumask for '\''job3'\'' is too big 00:38:01.863 Running I/O for 2 seconds... 
00:38:01.863 00:38:01.863 Latency(us) 00:38:01.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27805.34 27.15 0.00 0.00 9197.81 1809.69 14656.23 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27783.93 27.13 0.00 0.00 9186.20 1899.05 12690.15 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27764.48 27.11 0.00 0.00 9173.09 1899.05 10664.49 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27743.34 27.09 0.00 0.00 9161.66 1660.74 10843.23 00:38:01.863 =================================================================================================================== 00:38:01.863 Total : 111097.10 108.49 0.00 0.00 9179.69 1660.74 14656.23' 00:38:01.863 23:23:50 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-13 23:23:47.841647] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:01.863 [2024-07-13 23:23:47.842542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173645 ] 00:38:01.863 Using job config with 4 jobs 00:38:01.863 [2024-07-13 23:23:47.988515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.863 [2024-07-13 23:23:48.072404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.863 cpumask for '\''job0'\'' is too big 00:38:01.863 cpumask for '\''job1'\'' is too big 00:38:01.863 cpumask for '\''job2'\'' is too big 00:38:01.863 cpumask for '\''job3'\'' is too big 00:38:01.863 Running I/O for 2 seconds... 00:38:01.863 00:38:01.863 Latency(us) 00:38:01.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27805.34 27.15 0.00 0.00 9197.81 1809.69 14656.23 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27783.93 27.13 0.00 0.00 9186.20 1899.05 12690.15 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27764.48 27.11 0.00 0.00 9173.09 1899.05 10664.49 00:38:01.863 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.863 Malloc0 : 2.02 27743.34 27.09 0.00 0.00 9161.66 1660.74 10843.23 00:38:01.863 =================================================================================================================== 00:38:01.863 Total : 111097.10 108.49 0.00 0.00 9179.69 1660.74 14656.23' 00:38:01.864 23:23:50 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 23:23:47.841647] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:01.864 [2024-07-13 23:23:47.842542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173645 ] 00:38:01.864 Using job config with 4 jobs 00:38:01.864 [2024-07-13 23:23:47.988515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.864 [2024-07-13 23:23:48.072404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.864 cpumask for '\''job0'\'' is too big 00:38:01.864 cpumask for '\''job1'\'' is too big 00:38:01.864 cpumask for '\''job2'\'' is too big 00:38:01.864 cpumask for '\''job3'\'' is too big 00:38:01.864 Running I/O for 2 seconds... 00:38:01.864 00:38:01.864 Latency(us) 00:38:01.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.864 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.864 Malloc0 : 2.02 27805.34 27.15 0.00 0.00 9197.81 1809.69 14656.23 00:38:01.864 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.864 Malloc0 : 2.02 27783.93 27.13 0.00 0.00 9186.20 1899.05 12690.15 00:38:01.864 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.864 Malloc0 : 2.02 27764.48 27.11 0.00 0.00 9173.09 1899.05 10664.49 00:38:01.864 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:01.864 Malloc0 : 2.02 27743.34 27.09 0.00 0.00 9161.66 1660.74 10843.23 00:38:01.864 =================================================================================================================== 00:38:01.864 Total : 111097.10 108.49 0.00 0.00 9179.69 1660.74 14656.23' 00:38:01.864 23:23:50 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:01.864 23:23:50 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:01.864 23:23:50 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:38:01.864 23:23:50 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:01.864 [2024-07-13 23:23:50.604869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:01.864 [2024-07-13 23:23:50.605888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173681 ] 00:38:01.864 [2024-07-13 23:23:50.752468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.864 [2024-07-13 23:23:50.835743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.864 cpumask for 'job0' is too big 00:38:01.864 cpumask for 'job1' is too big 00:38:01.864 cpumask for 'job2' is too big 00:38:01.864 cpumask for 'job3' is too big 00:38:04.391 23:23:53 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:38:04.391 Running I/O for 2 seconds... 
00:38:04.391 00:38:04.391 Latency(us) 00:38:04.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.391 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:04.392 Malloc0 : 2.01 28022.19 27.37 0.00 0.00 9125.98 1854.37 14298.76 00:38:04.392 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:04.392 Malloc0 : 2.02 28037.83 27.38 0.00 0.00 9102.19 1705.43 12213.53 00:38:04.392 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:04.392 Malloc0 : 2.02 28018.24 27.36 0.00 0.00 9089.92 1601.16 10724.07 00:38:04.392 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:04.392 Malloc0 : 2.02 27998.38 27.34 0.00 0.00 9078.59 1563.93 10902.81 00:38:04.392 =================================================================================================================== 00:38:04.392 Total : 112076.64 109.45 0.00 0.00 9099.14 1563.93 14298.76' 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:04.392 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:04.392 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:04.392 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:04.392 23:23:53 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
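The repetition of the captured output above is an artifact of set -x tracing: the text is echoed once when bdevperf_output is assigned and again inside get_num_jobs, which simply greps bdevperf's banner line. A condensed sketch of that check, using the same grep pipeline the trace shows:

# Extract N from bdevperf's "Using job config with N jobs" banner.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

# test_config.sh then asserts the expected job-section count, e.g.:
[[ $(get_num_jobs "$bdevperf_output") == "4" ]]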
00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-13 23:23:53.392321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:06.925 [2024-07-13 23:23:53.392656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173720 ] 00:38:06.925 Using job config with 3 jobs 00:38:06.925 [2024-07-13 23:23:53.539287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.925 [2024-07-13 23:23:53.631145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.925 cpumask for '\''job0'\'' is too big 00:38:06.925 cpumask for '\''job1'\'' is too big 00:38:06.925 cpumask for '\''job2'\'' is too big 00:38:06.925 Running I/O for 2 seconds... 00:38:06.925 00:38:06.925 Latency(us) 00:38:06.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.01 37255.30 36.38 0.00 0.00 6863.73 1675.64 11260.28 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.01 37226.90 36.35 0.00 0.00 6855.51 1980.97 9175.04 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.02 37197.51 36.33 0.00 0.00 6845.06 1794.79 7983.48 00:38:06.925 =================================================================================================================== 00:38:06.925 Total : 111679.71 109.06 0.00 0.00 6854.77 1675.64 11260.28' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-13 23:23:53.392321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:06.925 [2024-07-13 23:23:53.392656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173720 ] 00:38:06.925 Using job config with 3 jobs 00:38:06.925 [2024-07-13 23:23:53.539287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.925 [2024-07-13 23:23:53.631145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.925 cpumask for '\''job0'\'' is too big 00:38:06.925 cpumask for '\''job1'\'' is too big 00:38:06.925 cpumask for '\''job2'\'' is too big 00:38:06.925 Running I/O for 2 seconds... 
00:38:06.925 00:38:06.925 Latency(us) 00:38:06.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.01 37255.30 36.38 0.00 0.00 6863.73 1675.64 11260.28 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.01 37226.90 36.35 0.00 0.00 6855.51 1980.97 9175.04 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.02 37197.51 36.33 0.00 0.00 6845.06 1794.79 7983.48 00:38:06.925 =================================================================================================================== 00:38:06.925 Total : 111679.71 109.06 0.00 0.00 6854.77 1675.64 11260.28' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 23:23:53.392321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:06.925 [2024-07-13 23:23:53.392656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173720 ] 00:38:06.925 Using job config with 3 jobs 00:38:06.925 [2024-07-13 23:23:53.539287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.925 [2024-07-13 23:23:53.631145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.925 cpumask for '\''job0'\'' is too big 00:38:06.925 cpumask for '\''job1'\'' is too big 00:38:06.925 cpumask for '\''job2'\'' is too big 00:38:06.925 Running I/O for 2 seconds... 00:38:06.925 00:38:06.925 Latency(us) 00:38:06.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.01 37255.30 36.38 0.00 0.00 6863.73 1675.64 11260.28 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.01 37226.90 36.35 0.00 0.00 6855.51 1980.97 9175.04 00:38:06.925 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:06.925 Malloc0 : 2.02 37197.51 36.33 0.00 0.00 6845.06 1794.79 7983.48 00:38:06.925 =================================================================================================================== 00:38:06.925 Total : 111679.71 109.06 0.00 0.00 6854.77 1675.64 11260.28' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 
00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:38:06.925 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:06.925 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:06.925 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:38:06.925 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:06.925 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:06.925 23:23:56 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:10.215 23:23:58 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-13 23:23:56.179877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:10.215 [2024-07-13 23:23:56.180117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173766 ] 00:38:10.215 Using job config with 4 jobs 00:38:10.215 [2024-07-13 23:23:56.321191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.215 [2024-07-13 23:23:56.419869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.215 cpumask for '\''job0'\'' is too big 00:38:10.215 cpumask for '\''job1'\'' is too big 00:38:10.215 cpumask for '\''job2'\'' is too big 00:38:10.215 cpumask for '\''job3'\'' is too big 00:38:10.215 Running I/O for 2 seconds... 00:38:10.215 00:38:10.216 Latency(us) 00:38:10.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.02 13936.93 13.61 0.00 0.00 18353.36 3485.32 28955.00 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13944.60 13.62 0.00 0.00 18326.90 4259.84 28716.68 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13934.86 13.61 0.00 0.00 18280.83 3664.06 24784.52 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13924.60 13.60 0.00 0.00 18277.87 4021.53 24784.52 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13915.20 13.59 0.00 0.00 18233.57 3425.75 21448.15 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13905.53 13.58 0.00 0.00 18231.40 3902.37 21567.30 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13895.39 13.57 0.00 0.00 18189.11 3485.32 20733.21 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.05 13885.52 13.56 0.00 0.00 18187.04 4021.53 20733.21 00:38:10.216 =================================================================================================================== 00:38:10.216 Total : 111342.63 108.73 0.00 0.00 18259.90 3425.75 28955.00' 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-13 23:23:56.179877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:10.216 [2024-07-13 23:23:56.180117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173766 ] 00:38:10.216 Using job config with 4 jobs 00:38:10.216 [2024-07-13 23:23:56.321191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.216 [2024-07-13 23:23:56.419869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.216 cpumask for '\''job0'\'' is too big 00:38:10.216 cpumask for '\''job1'\'' is too big 00:38:10.216 cpumask for '\''job2'\'' is too big 00:38:10.216 cpumask for '\''job3'\'' is too big 00:38:10.216 Running I/O for 2 seconds... 00:38:10.216 00:38:10.216 Latency(us) 00:38:10.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.02 13936.93 13.61 0.00 0.00 18353.36 3485.32 28955.00 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13944.60 13.62 0.00 0.00 18326.90 4259.84 28716.68 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13934.86 13.61 0.00 0.00 18280.83 3664.06 24784.52 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13924.60 13.60 0.00 0.00 18277.87 4021.53 24784.52 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13915.20 13.59 0.00 0.00 18233.57 3425.75 21448.15 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13905.53 13.58 0.00 0.00 18231.40 3902.37 21567.30 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13895.39 13.57 0.00 0.00 18189.11 3485.32 20733.21 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.05 13885.52 13.56 0.00 0.00 18187.04 4021.53 20733.21 00:38:10.216 =================================================================================================================== 00:38:10.216 Total : 111342.63 108.73 0.00 0.00 18259.90 3425.75 28955.00' 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 23:23:56.179877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:10.216 [2024-07-13 23:23:56.180117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173766 ] 00:38:10.216 Using job config with 4 jobs 00:38:10.216 [2024-07-13 23:23:56.321191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.216 [2024-07-13 23:23:56.419869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.216 cpumask for '\''job0'\'' is too big 00:38:10.216 cpumask for '\''job1'\'' is too big 00:38:10.216 cpumask for '\''job2'\'' is too big 00:38:10.216 cpumask for '\''job3'\'' is too big 00:38:10.216 Running I/O for 2 seconds... 
00:38:10.216 00:38:10.216 Latency(us) 00:38:10.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.02 13936.93 13.61 0.00 0.00 18353.36 3485.32 28955.00 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13944.60 13.62 0.00 0.00 18326.90 4259.84 28716.68 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13934.86 13.61 0.00 0.00 18280.83 3664.06 24784.52 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13924.60 13.60 0.00 0.00 18277.87 4021.53 24784.52 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13915.20 13.59 0.00 0.00 18233.57 3425.75 21448.15 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.04 13905.53 13.58 0.00 0.00 18231.40 3902.37 21567.30 00:38:10.216 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc0 : 2.04 13895.39 13.57 0.00 0.00 18189.11 3485.32 20733.21 00:38:10.216 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:10.216 Malloc1 : 2.05 13885.52 13.56 0.00 0.00 18187.04 4021.53 20733.21 00:38:10.216 =================================================================================================================== 00:38:10.216 Total : 111342.63 108.73 0.00 0.00 18259.90 3425.75 28955.00' 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:10.216 23:23:58 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:38:10.216 ************************************ 00:38:10.216 END TEST bdevperf_config 00:38:10.216 ************************************ 00:38:10.216 00:38:10.216 real 0m11.251s 00:38:10.216 user 0m9.748s 00:38:10.216 sys 0m0.948s 00:38:10.216 23:23:58 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:10.216 23:23:58 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:38:10.216 23:23:58 -- common/autotest_common.sh@1142 -- # return 0 00:38:10.216 23:23:58 -- spdk/autotest.sh@192 -- # uname -s 00:38:10.216 23:23:58 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:38:10.216 23:23:58 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:10.216 23:23:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:10.216 23:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:10.216 23:23:58 -- common/autotest_common.sh@10 -- # set +x 00:38:10.216 ************************************ 00:38:10.216 START TEST reactor_set_interrupt 00:38:10.216 ************************************ 00:38:10.216 23:23:58 reactor_set_interrupt -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:10.216 * Looking for test storage... 00:38:10.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:10.216 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:10.216 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@50 -- # 
CONFIG_URING_PATH= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:38:10.216 23:23:59 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:38:10.217 23:23:59 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:10.217 23:23:59 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:38:10.217 23:23:59 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:10.217 23:23:59 
reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:38:10.217 #define SPDK_CONFIG_H 00:38:10.217 #define SPDK_CONFIG_APPS 1 00:38:10.217 #define SPDK_CONFIG_ARCH native 00:38:10.217 #define SPDK_CONFIG_ASAN 1 00:38:10.217 #undef SPDK_CONFIG_AVAHI 00:38:10.217 #undef SPDK_CONFIG_CET 00:38:10.217 #define SPDK_CONFIG_COVERAGE 1 00:38:10.217 #define SPDK_CONFIG_CROSS_PREFIX 00:38:10.217 #undef SPDK_CONFIG_CRYPTO 00:38:10.217 #undef SPDK_CONFIG_CRYPTO_MLX5 00:38:10.217 #undef SPDK_CONFIG_CUSTOMOCF 00:38:10.217 #undef SPDK_CONFIG_DAOS 00:38:10.217 #define SPDK_CONFIG_DAOS_DIR 00:38:10.217 #define SPDK_CONFIG_DEBUG 1 00:38:10.217 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:38:10.217 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:38:10.217 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:38:10.217 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:38:10.217 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:38:10.217 #undef SPDK_CONFIG_DPDK_UADK 00:38:10.217 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:10.217 #define SPDK_CONFIG_EXAMPLES 1 00:38:10.217 #undef SPDK_CONFIG_FC 00:38:10.217 #define SPDK_CONFIG_FC_PATH 00:38:10.217 #define SPDK_CONFIG_FIO_PLUGIN 1 00:38:10.217 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:38:10.217 #undef SPDK_CONFIG_FUSE 00:38:10.217 #undef SPDK_CONFIG_FUZZER 00:38:10.217 #define SPDK_CONFIG_FUZZER_LIB 00:38:10.217 #undef SPDK_CONFIG_GOLANG 00:38:10.217 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:38:10.217 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:38:10.217 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:38:10.217 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:38:10.217 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:38:10.217 #undef SPDK_CONFIG_HAVE_LIBBSD 00:38:10.217 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:38:10.217 #define SPDK_CONFIG_IDXD 1 00:38:10.217 #undef SPDK_CONFIG_IDXD_KERNEL 00:38:10.217 #undef 
SPDK_CONFIG_IPSEC_MB 00:38:10.217 #define SPDK_CONFIG_IPSEC_MB_DIR 00:38:10.217 #define SPDK_CONFIG_ISAL 1 00:38:10.217 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:38:10.217 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:38:10.217 #define SPDK_CONFIG_LIBDIR 00:38:10.217 #undef SPDK_CONFIG_LTO 00:38:10.217 #define SPDK_CONFIG_MAX_LCORES 128 00:38:10.217 #define SPDK_CONFIG_NVME_CUSE 1 00:38:10.217 #undef SPDK_CONFIG_OCF 00:38:10.217 #define SPDK_CONFIG_OCF_PATH 00:38:10.217 #define SPDK_CONFIG_OPENSSL_PATH 00:38:10.217 #undef SPDK_CONFIG_PGO_CAPTURE 00:38:10.217 #define SPDK_CONFIG_PGO_DIR 00:38:10.217 #undef SPDK_CONFIG_PGO_USE 00:38:10.217 #define SPDK_CONFIG_PREFIX /usr/local 00:38:10.217 #define SPDK_CONFIG_RAID5F 1 00:38:10.217 #undef SPDK_CONFIG_RBD 00:38:10.217 #define SPDK_CONFIG_RDMA 1 00:38:10.217 #define SPDK_CONFIG_RDMA_PROV verbs 00:38:10.217 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:38:10.217 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:38:10.217 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:38:10.217 #undef SPDK_CONFIG_SHARED 00:38:10.217 #undef SPDK_CONFIG_SMA 00:38:10.217 #define SPDK_CONFIG_TESTS 1 00:38:10.217 #undef SPDK_CONFIG_TSAN 00:38:10.217 #undef SPDK_CONFIG_UBLK 00:38:10.217 #define SPDK_CONFIG_UBSAN 1 00:38:10.217 #define SPDK_CONFIG_UNIT_TESTS 1 00:38:10.217 #undef SPDK_CONFIG_URING 00:38:10.217 #define SPDK_CONFIG_URING_PATH 00:38:10.217 #undef SPDK_CONFIG_URING_ZNS 00:38:10.217 #undef SPDK_CONFIG_USDT 00:38:10.217 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:38:10.217 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:38:10.217 #undef SPDK_CONFIG_VFIO_USER 00:38:10.217 #define SPDK_CONFIG_VFIO_USER_DIR 00:38:10.217 #define SPDK_CONFIG_VHOST 1 00:38:10.217 #define SPDK_CONFIG_VIRTIO 1 00:38:10.217 #undef SPDK_CONFIG_VTUNE 00:38:10.217 #define SPDK_CONFIG_VTUNE_DIR 00:38:10.217 #define SPDK_CONFIG_WERROR 1 00:38:10.217 #define SPDK_CONFIG_WPDK_DIR 00:38:10.217 #undef SPDK_CONFIG_XNVME 00:38:10.217 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:10.217 23:23:59 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.217 23:23:59 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:10.217 23:23:59 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:10.217 23:23:59 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:10.217 23:23:59 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:38:10.217 23:23:59 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:38:10.217 23:23:59 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 
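The long run of paired ": <value>" and "export <FLAG>" commands traced above is the test-flag defaulting block of autotest_common.sh: each SPDK_TEST_* / SPDK_RUN_* switch is assigned a default only if the caller's environment did not already set it, and is then exported so the test processes spawned later inherit the decision. A minimal sketch of the idiom these trace pairs suggest (flag names and defaults are copied from the log; the exact source lines are assumed):

    : "${SPDK_TEST_OCF:=0}"        # expands to ": 0" under xtrace when the flag was unset
    export SPDK_TEST_OCF           # the next traced command, exactly as in the log
    : "${SPDK_TEST_NATIVE_DPDK:=v22.11.4}"
    export SPDK_TEST_NATIVE_DPDK

The ":" builtin does nothing by itself; it appears here only so the ${VAR:=default} expansion can run as a standalone statement.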
00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : v22.11.4 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:38:10.217 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@170 -- 
# export SPDK_TEST_NVMF_MDNS 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:38:10.218 23:23:59 
reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 173842 ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 173842 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.L9xrPx 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.L9xrPx/tests/interrupt /tmp/spdk.L9xrPx 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:38:10.218 23:23:59 reactor_set_interrupt -- 
common/autotest_common.sh@327 -- # grep -v Filesystem 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1248956416 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253683200 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4726784 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=9132384256 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=11467632640 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6265028608 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6268403712 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103061504 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:10.218 23:23:59 
reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1253675008 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253679104 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=95921922048 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=3780857856 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:38:10.218 * Looking for test storage... 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=9132384256 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:38:10.218 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=13682225152 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:38:10.219 23:23:59 
reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=173885 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 173885 /var/tmp/spdk.sock 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 173885 ']' 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:10.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:10.219 [2024-07-13 23:23:59.294586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:10.219 [2024-07-13 23:23:59.294926] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173885 ] 00:38:10.219 [2024-07-13 23:23:59.442514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:10.219 [2024-07-13 23:23:59.501926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.219 [2024-07-13 23:23:59.502066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.219 [2024-07-13 23:23:59.502066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:10.219 [2024-07-13 23:23:59.579725] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
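At this point the trace launches build/examples/interrupt_tgt on a three-core mask (-m 0x07, matching the three reactor_run notices) and enters waitforlisten, which blocks until PID 173885 is accepting RPCs on the UNIX domain socket /var/tmp/spdk.sock. As an illustration only, not SPDK's actual waitforlisten implementation, the wait amounts to polling for the socket while checking the target is still alive:

    # Hypothetical helper; the pid and socket path mirror the values in the trace.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while ((retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            [[ -S $sock ]] && return 0                # RPC socket is up
            sleep 0.1
        done
        return 1                                      # timed out
    }
    # wait_for_rpc_sock 173885 || exit 1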
00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:10.219 23:23:59 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:38:10.219 23:23:59 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:10.782 Malloc0 00:38:10.782 Malloc1 00:38:10.782 Malloc2 00:38:10.782 23:23:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:38:10.782 23:23:59 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:38:10.782 23:23:59 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:10.782 23:23:59 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:10.782 5000+0 records in 00:38:10.782 5000+0 records out 00:38:10.782 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0240165 s, 426 MB/s 00:38:10.782 23:23:59 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:11.040 AIO0 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 173885 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 173885 without_thd 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=173885 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.040 23:24:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.299 23:24:00 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:38:11.299 spdk_thread ids are 1 on reactor0. 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173885 0 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173885 0 idle 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:11.299 23:24:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173885 root 20 0 20.1t 62104 28716 S 0.0 0.5 0:00.27 reactor_0' 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173885 root 20 0 20.1t 62104 28716 S 0.0 0.5 0:00.27 reactor_0 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173885 1 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173885 1 idle 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:11.556 23:24:00 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:11.556 23:24:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173888 root 20 0 20.1t 62104 28716 S 0.0 0.5 0:00.00 reactor_1' 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173888 root 20 0 20.1t 62104 28716 S 0.0 0.5 0:00.00 reactor_1 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173885 2 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173885 2 idle 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173889 root 20 0 20.1t 62104 28716 S 0.0 0.5 0:00.00 reactor_2' 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173889 root 20 0 20.1t 62104 28716 S 0.0 0.5 0:00.00 reactor_2 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:11.814 
23:24:01 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:11.814 23:24:01 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:12.072 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:38:12.072 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:38:12.072 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:38:12.072 [2024-07-13 23:24:01.468515] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:12.330 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:38:12.330 [2024-07-13 23:24:01.732198] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:38:12.330 [2024-07-13 23:24:01.733125] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:12.587 23:24:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:38:12.587 [2024-07-13 23:24:01.992028] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:38:12.587 [2024-07-13 23:24:01.993041] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 173885 0 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 173885 0 busy 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173885 root 20 0 20.1t 62228 28716 R 99.9 0.5 0:00.72 reactor_0' 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173885 root 20 0 20.1t 62228 28716 R 99.9 0.5 0:00.72 reactor_0 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:12.845 23:24:02 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 173885 2 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 173885 2 busy 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:12.845 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173889 root 20 0 20.1t 62228 28716 R 99.9 0.5 0:00.35 reactor_2' 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173889 root 20 0 20.1t 62228 28716 R 99.9 0.5 0:00.35 reactor_2 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:13.102 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:38:13.360 [2024-07-13 23:24:02.576060] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:38:13.360 [2024-07-13 23:24:02.576952] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 173885 2 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173885 2 idle 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173889 root 20 0 20.1t 62344 28716 S 0.0 0.5 0:00.58 reactor_2' 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173889 root 20 0 20.1t 62344 28716 S 0.0 0.5 0:00.58 reactor_2 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:13.360 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:38:13.618 [2024-07-13 23:24:02.968050] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:38:13.618 [2024-07-13 23:24:02.968956] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:38:13.618 23:24:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:38:13.877 [2024-07-13 23:24:03.180478] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
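The trace above completes the thread-affinity round-trip that the dedicated-thread variant of this test (without_thd set) performs: before reactor 0 was dropped to poll mode, app_thread was parked on core 1, and now that reactor 0 is back in interrupt mode it is moved home to core 0. A minimal sketch of that round-trip, using only the RPC invocations visible in this run (the thread id 1, the interrupt_plugin name, and the rpc.py path are specific to this environment):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # park app_thread (thread id 1) on core 1 so reactor 0 can leave interrupt mode
    $rpc thread_set_cpumask -i 1 -m 0x2
    # toggle reactors 0 and 2 into poll mode and back (-d disables interrupt mode)
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0
    # move app_thread back to core 0 once reactor 0 handles interrupts again
    $rpc thread_set_cpumask -i 1 -m 0x1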
00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 173885 0 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173885 0 idle 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173885 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173885 -w 256 00:38:13.877 23:24:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173885 root 20 0 20.1t 62432 28716 S 0.0 0.5 0:01.52 reactor_0' 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173885 root 20 0 20.1t 62432 28716 S 0.0 0.5 0:01.52 reactor_0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:38:14.135 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 173885 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 173885 ']' 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 173885 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173885 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173885' 00:38:14.135 killing process with pid 173885 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@967 
-- # kill 173885 00:38:14.135 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 173885 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=174014 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:14.392 23:24:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 174014 /var/tmp/spdk.sock 00:38:14.392 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 174014 ']' 00:38:14.392 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.392 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:14.392 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.392 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:14.392 23:24:03 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:14.392 [2024-07-13 23:24:03.708446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:14.392 [2024-07-13 23:24:03.708884] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174014 ] 00:38:14.649 [2024-07-13 23:24:03.870263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:14.649 [2024-07-13 23:24:03.945790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.649 [2024-07-13 23:24:03.945915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.649 [2024-07-13 23:24:03.945915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:14.649 [2024-07-13 23:24:04.026633] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
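start_intr_tgt, traced above for this second test run, reduces to launching the interrupt_tgt example in the background and blocking until its RPC socket answers. A sketch of that sequence as reconstructed from the trace; waitforlisten and killprocess are helpers sourced from test/common/autotest_common.sh:

    rpc_addr=/var/tmp/spdk.sock
    cpu_mask=0x07
    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    # make sure the target is torn down if the test is interrupted
    trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$intr_tgt_pid" "$rpc_addr"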
00:38:15.581 23:24:04 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:15.581 23:24:04 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:38:15.581 23:24:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:38:15.581 23:24:04 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:15.581 Malloc0 00:38:15.581 Malloc1 00:38:15.581 Malloc2 00:38:15.581 23:24:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:38:15.581 23:24:04 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:38:15.581 23:24:04 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:15.581 23:24:04 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:15.839 5000+0 records in 00:38:15.839 5000+0 records out 00:38:15.839 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0250502 s, 409 MB/s 00:38:15.839 23:24:05 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:16.097 AIO0 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 174014 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 174014 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=174014 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:16.097 23:24:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:16.355 23:24:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:38:16.613 spdk_thread ids are 1 on reactor0. 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174014 0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174014 0 idle 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174014 root 20 0 20.1t 62104 28812 S 0.0 0.5 0:00.31 reactor_0' 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174014 root 20 0 20.1t 62104 28812 S 0.0 0.5 0:00.31 reactor_0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174014 1 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174014 1 idle 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != 
\b\u\s\y ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:38:16.613 23:24:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174018 root 20 0 20.1t 62104 28812 S 0.0 0.5 0:00.00 reactor_1' 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174018 root 20 0 20.1t 62104 28812 S 0.0 0.5 0:00.00 reactor_1 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174014 2 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174014 2 idle 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:16.872 23:24:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174022 root 20 0 20.1t 62104 28812 S 0.0 0.5 0:00.00 reactor_2' 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174022 root 20 0 20.1t 62104 28812 S 0.0 0.5 0:00.00 reactor_2 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:17.133 23:24:06 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:38:17.133 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:38:17.398 [2024-07-13 23:24:06.591686] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:38:17.398 [2024-07-13 23:24:06.592314] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:38:17.398 [2024-07-13 23:24:06.592762] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:17.398 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:38:17.655 [2024-07-13 23:24:06.887520] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:38:17.655 [2024-07-13 23:24:06.888322] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 174014 0 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 174014 0 busy 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:17.655 23:24:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:38:17.656 23:24:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:17.656 23:24:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:17.656 23:24:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:17.656 23:24:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:17.656 23:24:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174014 root 20 0 20.1t 62256 28812 R 99.9 0.5 0:00.78 reactor_0' 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174014 root 20 0 20.1t 62256 28812 R 99.9 0.5 0:00.78 reactor_0 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:38:17.913 
23:24:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 174014 2 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 174014 2 busy 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174022 root 20 0 20.1t 62256 28812 R 93.8 0.5 0:00.34 reactor_2' 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174022 root 20 0 20.1t 62256 28812 R 93.8 0.5 0:00.34 reactor_2 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:17.913 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:38:18.170 [2024-07-13 23:24:07.507678] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
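Every reactor_is_busy/reactor_is_idle probe in this log follows the pattern just shown: one batch-mode top snapshot of the target pid, the %CPU column (field 9) extracted, and a threshold test; busy demands at least 70% and idle tolerates at most 30%. A sketch of that core, as it reads from the trace; the integer truncation is assumed to be a plain parameter expansion, since only its result (99.9 -> 99) is visible:

    # one-shot, per-thread top view limited to the target pid
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}                  # 99.9 -> 99 (assumed mechanism)
    if [[ $state = busy ]]; then
        [[ $cpu_rate -lt 70 ]] && return 1   # a busy reactor must spin at >= 70%
    else
        [[ $cpu_rate -gt 30 ]] && return 1   # an idle reactor must stay at <= 30%
    fi
    return 0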
00:38:18.170 [2024-07-13 23:24:07.508297] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 174014 2 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174014 2 idle 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:18.170 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174022 root 20 0 20.1t 62256 28812 S 0.0 0.5 0:00.61 reactor_2' 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174022 root 20 0 20.1t 62256 28812 S 0.0 0.5 0:00.61 reactor_2 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:18.427 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:38:18.684 [2024-07-13 23:24:07.963763] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:38:18.684 [2024-07-13 23:24:07.964320] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
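The thd0_ids/thd2_ids arrays this test juggles were filled earlier in the run by the thread_get_stats | jq pipelines (reactor_get_thread_ids): the reactor's cpumask, with its 0x prefix stripped, is matched against each thread's cpumask field. A sketch of that lookup; how the prefix is actually stripped is not visible in the trace (0x1 simply reappears as 1), so printf stands in for it here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        reactor_cpumask=$(printf '%x' "$((reactor_cpumask))")   # 0x1 -> 1, 0x4 -> 4
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
        "$rpc" thread_get_stats | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }
    thd0_ids=($(reactor_get_thread_ids 0x1))   # "1" in this run
    thd2_ids=($(reactor_get_thread_ids 0x4))   # empty: nothing pinned to core 2 yet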
00:38:18.684 [2024-07-13 23:24:07.964540] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 174014 0 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174014 0 idle 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174014 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174014 -w 256 00:38:18.684 23:24:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174014 root 20 0 20.1t 62380 28812 S 0.0 0.5 0:01.68 reactor_0' 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174014 root 20 0 20.1t 62380 28812 S 0.0 0.5 0:01.68 reactor_0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:38:18.941 23:24:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 174014 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 174014 ']' 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 174014 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174014 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
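killprocess, whose trace begins above and finishes below, is the generic teardown helper from common/autotest_common.sh. Its flow, sketched under the assumption that the untaken checks simply return (the real helper also special-cases a process named sudo, which this run never hits, and its exact return codes are not visible here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the sudo special case of the real helper is omitted from this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }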
00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174014' 00:38:18.941 killing process with pid 174014 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 174014 00:38:18.941 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 174014 00:38:19.198 23:24:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:38:19.198 23:24:08 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:19.198 ************************************ 00:38:19.198 END TEST reactor_set_interrupt 00:38:19.198 ************************************ 00:38:19.198 00:38:19.198 real 0m9.502s 00:38:19.198 user 0m9.861s 00:38:19.198 sys 0m1.421s 00:38:19.198 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:19.198 23:24:08 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.198 23:24:08 -- common/autotest_common.sh@1142 -- # return 0 00:38:19.198 23:24:08 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:19.198 23:24:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:19.198 23:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:19.198 23:24:08 -- common/autotest_common.sh@10 -- # set +x 00:38:19.198 ************************************ 00:38:19.198 START TEST reap_unregistered_poller 00:38:19.198 ************************************ 00:38:19.198 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:19.458 * Looking for test storage... 00:38:19.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
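The rm -f in the cleanup above removes the AIO backing file that setup_bdev_aio created earlier in the run (the interrupt/common.sh@75-77 trace: dd followed by bdev_aio_create). The two halves pair up roughly as in this sketch; the file is only created on non-FreeBSD hosts, and the literal path and the AIO0 bdev name are the ones from this run:

    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if [[ $(uname -s) != FreeBSD ]]; then
        dd if=/dev/zero of="$aiofile" bs=2048 count=5000    # ~10 MB of zeroes
        "$rpc" bdev_aio_create "$aiofile" AIO0 2048         # expose it as bdev AIO0
    fi
    # ... test body ...
    rm -f "$aiofile"                                        # cleanup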
00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:19.458 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:19.458 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:19.458 23:24:08 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@58 -- 
# CONFIG_UBSAN=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:38:19.459 23:24:08 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:38:19.459 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:38:19.459 #define SPDK_CONFIG_H 00:38:19.459 #define SPDK_CONFIG_APPS 1 00:38:19.459 #define SPDK_CONFIG_ARCH native 00:38:19.459 #define SPDK_CONFIG_ASAN 1 00:38:19.459 #undef SPDK_CONFIG_AVAHI 00:38:19.459 #undef SPDK_CONFIG_CET 00:38:19.459 #define SPDK_CONFIG_COVERAGE 1 00:38:19.459 #define SPDK_CONFIG_CROSS_PREFIX 00:38:19.459 #undef SPDK_CONFIG_CRYPTO 00:38:19.459 #undef SPDK_CONFIG_CRYPTO_MLX5 00:38:19.459 #undef SPDK_CONFIG_CUSTOMOCF 00:38:19.459 #undef SPDK_CONFIG_DAOS 00:38:19.459 #define SPDK_CONFIG_DAOS_DIR 00:38:19.459 #define SPDK_CONFIG_DEBUG 1 00:38:19.459 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:38:19.459 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:38:19.459 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:38:19.459 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:38:19.459 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:38:19.459 #undef SPDK_CONFIG_DPDK_UADK 00:38:19.459 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:19.459 #define SPDK_CONFIG_EXAMPLES 1 00:38:19.459 #undef SPDK_CONFIG_FC 00:38:19.459 #define SPDK_CONFIG_FC_PATH 00:38:19.459 #define SPDK_CONFIG_FIO_PLUGIN 1 00:38:19.459 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:38:19.459 #undef SPDK_CONFIG_FUSE 00:38:19.459 #undef SPDK_CONFIG_FUZZER 00:38:19.459 #define SPDK_CONFIG_FUZZER_LIB 00:38:19.459 #undef SPDK_CONFIG_GOLANG 00:38:19.459 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:38:19.459 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:38:19.459 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:38:19.459 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:38:19.459 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:38:19.459 #undef SPDK_CONFIG_HAVE_LIBBSD 00:38:19.459 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:38:19.459 #define SPDK_CONFIG_IDXD 1 00:38:19.459 #undef SPDK_CONFIG_IDXD_KERNEL 00:38:19.459 #undef SPDK_CONFIG_IPSEC_MB 00:38:19.459 #define SPDK_CONFIG_IPSEC_MB_DIR 00:38:19.459 #define SPDK_CONFIG_ISAL 1 00:38:19.459 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:38:19.459 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:38:19.459 #define SPDK_CONFIG_LIBDIR 00:38:19.459 #undef SPDK_CONFIG_LTO 00:38:19.459 #define SPDK_CONFIG_MAX_LCORES 128 00:38:19.459 #define SPDK_CONFIG_NVME_CUSE 1 00:38:19.459 #undef SPDK_CONFIG_OCF 00:38:19.459 #define SPDK_CONFIG_OCF_PATH 00:38:19.459 #define SPDK_CONFIG_OPENSSL_PATH 00:38:19.459 #undef SPDK_CONFIG_PGO_CAPTURE 00:38:19.459 #define SPDK_CONFIG_PGO_DIR 00:38:19.459 #undef SPDK_CONFIG_PGO_USE 00:38:19.459 #define SPDK_CONFIG_PREFIX 
/usr/local 00:38:19.459 #define SPDK_CONFIG_RAID5F 1 00:38:19.459 #undef SPDK_CONFIG_RBD 00:38:19.459 #define SPDK_CONFIG_RDMA 1 00:38:19.459 #define SPDK_CONFIG_RDMA_PROV verbs 00:38:19.459 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:38:19.459 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:38:19.459 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:38:19.459 #undef SPDK_CONFIG_SHARED 00:38:19.459 #undef SPDK_CONFIG_SMA 00:38:19.459 #define SPDK_CONFIG_TESTS 1 00:38:19.459 #undef SPDK_CONFIG_TSAN 00:38:19.459 #undef SPDK_CONFIG_UBLK 00:38:19.459 #define SPDK_CONFIG_UBSAN 1 00:38:19.459 #define SPDK_CONFIG_UNIT_TESTS 1 00:38:19.459 #undef SPDK_CONFIG_URING 00:38:19.459 #define SPDK_CONFIG_URING_PATH 00:38:19.459 #undef SPDK_CONFIG_URING_ZNS 00:38:19.459 #undef SPDK_CONFIG_USDT 00:38:19.459 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:38:19.459 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:38:19.459 #undef SPDK_CONFIG_VFIO_USER 00:38:19.459 #define SPDK_CONFIG_VFIO_USER_DIR 00:38:19.459 #define SPDK_CONFIG_VHOST 1 00:38:19.459 #define SPDK_CONFIG_VIRTIO 1 00:38:19.459 #undef SPDK_CONFIG_VTUNE 00:38:19.459 #define SPDK_CONFIG_VTUNE_DIR 00:38:19.459 #define SPDK_CONFIG_WERROR 1 00:38:19.459 #define SPDK_CONFIG_WPDK_DIR 00:38:19.459 #undef SPDK_CONFIG_XNVME 00:38:19.459 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:38:19.459 23:24:08 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:38:19.459 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:19.459 23:24:08 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.459 23:24:08 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.459 23:24:08 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.459 23:24:08 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.459 23:24:08 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.459 23:24:08 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.459 23:24:08 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:38:19.459 23:24:08 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:19.459 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:19.459 23:24:08 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:19.459 23:24:08 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:38:19.460 23:24:08 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:38:19.460 23:24:08 reap_unregistered_poller -- 
common/autotest_common.sh@130 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : v22.11.4 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:38:19.460 23:24:08 
reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:38:19.460 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:19.461 23:24:08 reap_unregistered_poller -- 
common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 174184 ]] 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 174184 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ayP470 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:38:19.461 23:24:08 reap_unregistered_poller -- 
common/autotest_common.sh@345 -- # [[ -n '' ]] 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.ayP470/tests/interrupt /tmp/spdk.ayP470 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1248956416 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253683200 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4726784 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=9132343296 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=11467673600 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6265028608 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6268403712 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=/dev/vda15 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:38:19.461 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103061504 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109395968 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1253675008 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253679104 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=95921766400 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=3781013504 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:38:19.462 * Looking for test storage... 
00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=9132343296 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=13682266112 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=174227 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:19.462 23:24:08 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 174227 /var/tmp/spdk.sock 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 174227 ']' 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
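At this point the harness has launched interrupt_tgt with `-m 0x07 -r /var/tmp/spdk.sock` and waitforlisten is polling until the RPC socket accepts connections. The real helper in autotest_common.sh does more (retry budget, early-exit diagnostics); the sketch below shows only the core idea, with the retry count and sleep interval picked arbitrarily.

#!/usr/bin/env bash
# Block until a UNIX-domain RPC socket accepts connections, or give up.
sock=/var/tmp/spdk.sock    # same default address the harness uses
pid=$1                     # PID of the target process we launched

for _ in $(seq 1 100); do
    # Fail fast if the target already died.
    kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
    # Try one connect; success means the listener is up.
    if python3 -c 'import socket,sys; socket.socket(socket.AF_UNIX).connect(sys.argv[1])' "$sock" 2>/dev/null; then
        echo "listening on $sock"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $sock" >&2
exit 1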
00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:19.462 23:24:08 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:19.462 [2024-07-13 23:24:08.827026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:19.462 [2024-07-13 23:24:08.827733] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174227 ] 00:38:19.719 [2024-07-13 23:24:08.995407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:19.719 [2024-07-13 23:24:09.084497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.719 [2024-07-13 23:24:09.084609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.719 [2024-07-13 23:24:09.084608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:19.976 [2024-07-13 23:24:09.167854] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:20.541 23:24:09 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:20.541 23:24:09 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:38:20.541 23:24:09 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:20.541 23:24:09 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:20.541 23:24:09 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:38:20.541 "name": "app_thread", 00:38:20.541 "id": 1, 00:38:20.541 "active_pollers": [], 00:38:20.541 "timed_pollers": [ 00:38:20.541 { 00:38:20.541 "name": "rpc_subsystem_poll_servers", 00:38:20.541 "id": 1, 00:38:20.541 "state": "waiting", 00:38:20.541 "run_count": 0, 00:38:20.541 "busy_count": 0, 00:38:20.541 "period_ticks": 8800000 00:38:20.541 } 00:38:20.541 ], 00:38:20.541 "paused_pollers": [] 00:38:20.541 }' 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:38:20.541 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:38:20.798 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:38:20.798 23:24:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:38:20.798 23:24:09 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:38:20.798 23:24:09 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:20.798 23:24:09 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:38:20.798 5000+0 records in 00:38:20.798 5000+0 records out 00:38:20.798 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0276699 s, 370 MB/s 00:38:20.798 23:24:10 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:21.055 AIO0 00:38:21.056 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:21.313 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:38:21.313 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:38:21.313 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:21.313 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:38:21.313 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:21.313 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:38:21.570 "name": "app_thread", 00:38:21.570 "id": 1, 00:38:21.570 "active_pollers": [], 00:38:21.570 "timed_pollers": [ 00:38:21.570 { 00:38:21.570 "name": "rpc_subsystem_poll_servers", 00:38:21.570 "id": 1, 00:38:21.570 "state": "waiting", 00:38:21.570 "run_count": 0, 00:38:21.570 "busy_count": 0, 00:38:21.570 "period_ticks": 8800000 00:38:21.570 } 00:38:21.570 ], 00:38:21.570 "paused_pollers": [] 00:38:21.570 }' 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:38:21.570 23:24:10 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 174227 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 174227 ']' 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 174227 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174227 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:21.570 23:24:10 reap_unregistered_poller -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 174227' 00:38:21.570 killing process with pid 174227 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 174227 00:38:21.570 23:24:10 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 174227 00:38:21.827 23:24:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:38:21.827 23:24:11 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:21.827 ************************************ 00:38:21.827 END TEST reap_unregistered_poller 00:38:21.827 ************************************ 00:38:21.827 00:38:21.827 real 0m2.621s 00:38:21.827 user 0m1.754s 00:38:21.827 sys 0m0.540s 00:38:21.827 23:24:11 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:21.827 23:24:11 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:21.827 23:24:11 -- common/autotest_common.sh@1142 -- # return 0 00:38:21.827 23:24:11 -- spdk/autotest.sh@198 -- # uname -s 00:38:21.827 23:24:11 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:38:21.827 23:24:11 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:38:21.827 23:24:11 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:38:21.827 23:24:11 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:21.827 23:24:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:21.827 23:24:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:21.827 23:24:11 -- common/autotest_common.sh@10 -- # set +x 00:38:21.827 ************************************ 00:38:21.827 START TEST spdk_dd 00:38:21.827 ************************************ 00:38:21.827 23:24:11 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:22.085 * Looking for test storage... 
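The poller checks traced above boil down to one RPC and two jq filters: ask the target for its threads' pollers, then read the active and timed poller names out of the first thread. Reproduced by hand it looks like the sketch below (paths match the repo layout of this run; rpc.py's -s flag selects the UNIX socket):

#!/usr/bin/env bash
# List the app thread's pollers over the SPDK JSON-RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

thread=$("$rpc" -s "$sock" thread_get_pollers | jq -r '.threads[0]')
echo "$thread" | jq -r '.active_pollers[].name'   # busy pollers (empty here)
echo "$thread" | jq -r '.timed_pollers[].name'    # e.g. rpc_subsystem_poll_servers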
00:38:22.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:22.085 23:24:11 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:22.085 23:24:11 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.085 23:24:11 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.085 23:24:11 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.085 23:24:11 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:22.085 23:24:11 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:22.085 23:24:11 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:22.085 23:24:11 spdk_dd -- paths/export.sh@5 -- # export PATH 00:38:22.085 23:24:11 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:22.085 23:24:11 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:22.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:38:22.343 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:23.276 23:24:12 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:38:23.276 23:24:12 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@230 -- # local class 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@232 -- # local progif 00:38:23.276 23:24:12 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@233 -- # class=01 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@15 -- # local i 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@24 -- # return 0 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:38:23.276 23:24:12 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:38:23.276 23:24:12 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@139 -- # local lib so 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == 
liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.276 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:38:23.535 23:24:12 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:23.535 23:24:12 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:38:23.535 23:24:12 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:38:23.535 23:24:12 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:23.535 23:24:12 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:23.535 23:24:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:23.535 ************************************ 00:38:23.535 START TEST spdk_dd_basic_rw 00:38:23.535 ************************************ 00:38:23.535 23:24:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:38:23.535 * Looking for test storage... 
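check_liburing, traced just above, decides whether the freshly built spdk_dd links against liburing by asking the dynamic loader to list the shared objects it would map (LD_TRACE_LOADED_OBJECTS=1, the mechanism behind ldd) and matching each name against liburing.so.*. Condensed to its essence:

#!/usr/bin/env bash
# Report whether a binary is dynamically linked against liburing.
bin=${1:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}

if LD_TRACE_LOADED_OBJECTS=1 "$bin" | grep -q 'liburing\.so'; then
    echo "liburing in use"
else
    echo "liburing not linked"
fi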
00:38:23.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:23.535 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:23.535 23:24:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.535 23:24:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.535 23:24:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' 
['traddr']='0000:00:10.0' ['trtype']='pcie') 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:38:23.536 23:24:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:38:23.796 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported 
Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational 
Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 119 Data Units Written: 7 Host Read Commands: 2515 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:38:23.796 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware 
Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset 
Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 119 Data Units Written: 7 Host Read Commands: 2515 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in 
LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:23.797 ************************************ 00:38:23.797 START TEST dd_bs_lt_native_bs 00:38:23.797 ************************************ 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 
--bs=2048 --json /dev/fd/61 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:23.797 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:23.797 { 00:38:23.797 "subsystems": [ 00:38:23.797 { 00:38:23.797 "subsystem": "bdev", 00:38:23.797 "config": [ 00:38:23.797 { 00:38:23.797 "params": { 00:38:23.797 "trtype": "pcie", 00:38:23.797 "traddr": "0000:00:10.0", 00:38:23.797 "name": "Nvme0" 00:38:23.797 }, 00:38:23.797 "method": "bdev_nvme_attach_controller" 00:38:23.797 }, 00:38:23.797 { 00:38:23.797 "method": "bdev_wait_for_examine" 00:38:23.797 } 00:38:23.797 ] 00:38:23.797 } 00:38:23.797 ] 00:38:23.797 } 00:38:23.797 [2024-07-13 23:24:13.093264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
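For readers following the trace: the two long [[ ... =~ ... ]] matches against the spdk_nvme_identify dump further up are how dd/common.sh derives the device's native block size. A condensed sketch of that logic, keeping the traced names (pci, id, lbaf) and the traced regexes, with the fd plumbing omitted:

    # sketch of get_native_nvme_bs (dd/common.sh), condensed from the trace above
    pci=0000:00:10.0
    mapfile -t id < <(build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    # first match: index of the current LBA format (here: 04)
    [[ ${id[*]} =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]] && lbaf=${BASH_REMATCH[1]}
    # second match: data size of that format (here: 4096), i.e. the native block size
    [[ ${id[*]} =~ LBA\ Format\ #$lbaf:\ Data\ Size:\ *([0-9]+) ]] && echo "${BASH_REMATCH[1]}"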
00:38:23.797 [2024-07-13 23:24:13.093729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174532 ] 00:38:24.054 [2024-07-13 23:24:13.236707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.054 [2024-07-13 23:24:13.326308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.311 [2024-07-13 23:24:13.494282] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:38:24.311 [2024-07-13 23:24:13.494395] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:24.311 [2024-07-13 23:24:13.621153] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:24.568 00:38:24.568 real 0m0.711s 00:38:24.568 user 0m0.448s 00:38:24.568 sys 0m0.222s 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:24.568 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:38:24.568 ************************************ 00:38:24.569 END TEST dd_bs_lt_native_bs 00:38:24.569 ************************************ 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:24.569 ************************************ 00:38:24.569 START TEST dd_rw 00:38:24.569 ************************************ 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << 
bs))) 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:24.569 23:24:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:25.133 23:24:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:38:25.133 23:24:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:25.133 23:24:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:25.133 23:24:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:25.133 [2024-07-13 23:24:14.452688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:25.133 [2024-07-13 23:24:14.452937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174581 ] 00:38:25.133 { 00:38:25.133 "subsystems": [ 00:38:25.133 { 00:38:25.133 "subsystem": "bdev", 00:38:25.133 "config": [ 00:38:25.133 { 00:38:25.133 "params": { 00:38:25.133 "trtype": "pcie", 00:38:25.133 "traddr": "0000:00:10.0", 00:38:25.133 "name": "Nvme0" 00:38:25.133 }, 00:38:25.133 "method": "bdev_nvme_attach_controller" 00:38:25.133 }, 00:38:25.133 { 00:38:25.133 "method": "bdev_wait_for_examine" 00:38:25.133 } 00:38:25.133 ] 00:38:25.133 } 00:38:25.133 ] 00:38:25.133 } 00:38:25.391 [2024-07-13 23:24:14.590686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.391 [2024-07-13 23:24:14.661261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.908  Copying: 60/60 [kB] (average 29 MBps) 00:38:25.908 00:38:25.908 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:38:25.908 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:25.908 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:25.908 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:25.908 [2024-07-13 23:24:15.174025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
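The dd_bs_lt_native_bs pass above is a negative test: against the 4096-byte native block size just detected, spdk_dd is invoked with --bs=2048 and is expected to refuse, hence the *ERROR* lines. A minimal sketch of the pattern, assuming autotest_common.sh's NOT helper, which inverts the wrapped command's exit status:

    # must fail: bs (2048) is below the device's native block size (4096)
    NOT build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
    # per the trace, the raw exit status is normalized (es=234 -> 106 -> 1)
    # before NOT's final check that it is non-zero, so the test passes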
00:38:25.908 [2024-07-13 23:24:15.174284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174589 ] 00:38:25.908 { 00:38:25.908 "subsystems": [ 00:38:25.908 { 00:38:25.908 "subsystem": "bdev", 00:38:25.908 "config": [ 00:38:25.908 { 00:38:25.908 "params": { 00:38:25.908 "trtype": "pcie", 00:38:25.908 "traddr": "0000:00:10.0", 00:38:25.908 "name": "Nvme0" 00:38:25.908 }, 00:38:25.908 "method": "bdev_nvme_attach_controller" 00:38:25.908 }, 00:38:25.908 { 00:38:25.908 "method": "bdev_wait_for_examine" 00:38:25.908 } 00:38:25.908 ] 00:38:25.908 } 00:38:25.908 ] 00:38:25.908 } 00:38:26.167 [2024-07-13 23:24:15.316638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.167 [2024-07-13 23:24:15.379436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.426  Copying: 60/60 [kB] (average 19 MBps) 00:38:26.426 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:26.426 23:24:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:26.685 [2024-07-13 23:24:15.871349] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
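Each dd_rw pass repeats the four-step verify cycle traced here; condensed, with repo paths shortened and the gen_conf fd plumbing written as process substitution for brevity:

    # one pass (here bs=4096, qd=1, 15 blocks = 61440 bytes)
    spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)             # write the pattern
    spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)  # read it back
    diff -q test/dd/dd.dump0 test/dd/dd.dump1                                                  # byte-for-byte compare
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)              # clear_nvme: rezero the region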
00:38:26.685 [2024-07-13 23:24:15.871569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174610 ] 00:38:26.685 { 00:38:26.685 "subsystems": [ 00:38:26.685 { 00:38:26.685 "subsystem": "bdev", 00:38:26.685 "config": [ 00:38:26.685 { 00:38:26.685 "params": { 00:38:26.685 "trtype": "pcie", 00:38:26.685 "traddr": "0000:00:10.0", 00:38:26.685 "name": "Nvme0" 00:38:26.685 }, 00:38:26.685 "method": "bdev_nvme_attach_controller" 00:38:26.685 }, 00:38:26.685 { 00:38:26.685 "method": "bdev_wait_for_examine" 00:38:26.685 } 00:38:26.685 ] 00:38:26.685 } 00:38:26.685 ] 00:38:26.685 } 00:38:26.685 [2024-07-13 23:24:16.005963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.685 [2024-07-13 23:24:16.064853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.205  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:27.205 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:27.205 23:24:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:27.771 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:38:27.771 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:27.771 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:27.771 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:27.771 { 00:38:27.771 "subsystems": [ 00:38:27.771 { 00:38:27.771 "subsystem": "bdev", 00:38:27.771 "config": [ 00:38:27.771 { 00:38:27.771 "params": { 00:38:27.771 "trtype": "pcie", 00:38:27.771 "traddr": "0000:00:10.0", 00:38:27.771 "name": "Nvme0" 00:38:27.771 }, 00:38:27.771 "method": "bdev_nvme_attach_controller" 00:38:27.771 }, 00:38:27.771 { 00:38:27.771 "method": "bdev_wait_for_examine" 00:38:27.771 } 00:38:27.771 ] 00:38:27.771 } 00:38:27.771 ] 00:38:27.771 } 00:38:27.771 [2024-07-13 23:24:17.166550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
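From here the trace cycles through the remaining queue-depth and block-size combinations. The sweep structure of dd_rw, reconstructed from the qds assignment and the native_bs shifts traced at the start of the test:

    native_bs=4096        # from get_native_nvme_bs above
    qds=(1 64)
    bss=()
    for bs in {0..2}; do bss+=($(( native_bs << bs ))); done   # 4096 8192 16384
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        echo "pass: bs=$bs qd=$qd"   # each pass = the write/read/diff/clear cycle sketched above
      done
    done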
00:38:27.771 [2024-07-13 23:24:17.166819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174630 ] 00:38:28.029 [2024-07-13 23:24:17.312176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.029 [2024-07-13 23:24:17.378316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.547  Copying: 60/60 [kB] (average 58 MBps) 00:38:28.547 00:38:28.547 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:38:28.547 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:28.547 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:28.547 23:24:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:28.547 [2024-07-13 23:24:17.879535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:28.547 [2024-07-13 23:24:17.879806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174645 ] 00:38:28.547 { 00:38:28.547 "subsystems": [ 00:38:28.547 { 00:38:28.547 "subsystem": "bdev", 00:38:28.547 "config": [ 00:38:28.547 { 00:38:28.547 "params": { 00:38:28.547 "trtype": "pcie", 00:38:28.547 "traddr": "0000:00:10.0", 00:38:28.547 "name": "Nvme0" 00:38:28.547 }, 00:38:28.547 "method": "bdev_nvme_attach_controller" 00:38:28.547 }, 00:38:28.547 { 00:38:28.547 "method": "bdev_wait_for_examine" 00:38:28.547 } 00:38:28.547 ] 00:38:28.547 } 00:38:28.547 ] 00:38:28.547 } 00:38:28.805 [2024-07-13 23:24:18.027780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.805 [2024-07-13 23:24:18.098742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.322  Copying: 60/60 [kB] (average 58 MBps) 00:38:29.322 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:29.322 23:24:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:29.322 [2024-07-13 23:24:18.608472] 
Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:29.322 [2024-07-13 23:24:18.608712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174666 ] 00:38:29.322 { 00:38:29.322 "subsystems": [ 00:38:29.322 { 00:38:29.322 "subsystem": "bdev", 00:38:29.322 "config": [ 00:38:29.322 { 00:38:29.322 "params": { 00:38:29.322 "trtype": "pcie", 00:38:29.322 "traddr": "0000:00:10.0", 00:38:29.322 "name": "Nvme0" 00:38:29.322 }, 00:38:29.322 "method": "bdev_nvme_attach_controller" 00:38:29.322 }, 00:38:29.322 { 00:38:29.322 "method": "bdev_wait_for_examine" 00:38:29.322 } 00:38:29.322 ] 00:38:29.322 } 00:38:29.322 ] 00:38:29.322 } 00:38:29.579 [2024-07-13 23:24:18.748706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.579 [2024-07-13 23:24:18.803501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.836  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:29.836 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:29.836 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:30.403 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:38:30.403 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:30.403 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:30.403 23:24:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:30.662 [2024-07-13 23:24:19.836976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
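Note how count shrinks as the block size grows (15 in the 4096-byte passes above, 7 here, 3 in the 16384-byte passes below): each pass keeps the transfer within the 61440-byte pattern produced by gen_bytes. The observed values are consistent with simple floor division, though basic_rw.sh may derive them differently:

    for bs in 4096 8192 16384; do
      count=$(( 61440 / bs ))                              # 15, 7, 3
      echo "bs=$bs count=$count size=$(( count * bs ))"    # 61440, 57344, 49152
    done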
00:38:30.662 [2024-07-13 23:24:19.837258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174686 ] 00:38:30.662 { 00:38:30.662 "subsystems": [ 00:38:30.662 { 00:38:30.662 "subsystem": "bdev", 00:38:30.662 "config": [ 00:38:30.662 { 00:38:30.662 "params": { 00:38:30.662 "trtype": "pcie", 00:38:30.662 "traddr": "0000:00:10.0", 00:38:30.662 "name": "Nvme0" 00:38:30.662 }, 00:38:30.662 "method": "bdev_nvme_attach_controller" 00:38:30.662 }, 00:38:30.662 { 00:38:30.662 "method": "bdev_wait_for_examine" 00:38:30.662 } 00:38:30.662 ] 00:38:30.662 } 00:38:30.662 ] 00:38:30.662 } 00:38:30.662 [2024-07-13 23:24:19.983709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.662 [2024-07-13 23:24:20.040688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.179  Copying: 56/56 [kB] (average 54 MBps) 00:38:31.179 00:38:31.179 23:24:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:38:31.179 23:24:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:31.179 23:24:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:31.179 23:24:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:31.179 [2024-07-13 23:24:20.550645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:31.179 [2024-07-13 23:24:20.550920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174704 ] 00:38:31.179 { 00:38:31.179 "subsystems": [ 00:38:31.179 { 00:38:31.179 "subsystem": "bdev", 00:38:31.179 "config": [ 00:38:31.179 { 00:38:31.179 "params": { 00:38:31.179 "trtype": "pcie", 00:38:31.179 "traddr": "0000:00:10.0", 00:38:31.179 "name": "Nvme0" 00:38:31.179 }, 00:38:31.179 "method": "bdev_nvme_attach_controller" 00:38:31.179 }, 00:38:31.179 { 00:38:31.179 "method": "bdev_wait_for_examine" 00:38:31.179 } 00:38:31.179 ] 00:38:31.179 } 00:38:31.179 ] 00:38:31.179 } 00:38:31.437 [2024-07-13 23:24:20.696351] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.437 [2024-07-13 23:24:20.760962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.953  Copying: 56/56 [kB] (average 27 MBps) 00:38:31.953 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:31.953 23:24:21 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:31.953 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:31.953 [2024-07-13 23:24:21.293948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:31.953 [2024-07-13 23:24:21.294242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174717 ] 00:38:31.953 { 00:38:31.953 "subsystems": [ 00:38:31.953 { 00:38:31.953 "subsystem": "bdev", 00:38:31.953 "config": [ 00:38:31.953 { 00:38:31.953 "params": { 00:38:31.953 "trtype": "pcie", 00:38:31.953 "traddr": "0000:00:10.0", 00:38:31.953 "name": "Nvme0" 00:38:31.953 }, 00:38:31.953 "method": "bdev_nvme_attach_controller" 00:38:31.953 }, 00:38:31.953 { 00:38:31.953 "method": "bdev_wait_for_examine" 00:38:31.953 } 00:38:31.953 ] 00:38:31.953 } 00:38:31.953 ] 00:38:31.953 } 00:38:32.212 [2024-07-13 23:24:21.439508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.212 [2024-07-13 23:24:21.493930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.728  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:32.728 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:32.728 23:24:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:33.295 23:24:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:38:33.295 23:24:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:33.295 23:24:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:33.295 23:24:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:33.295 [2024-07-13 23:24:22.524715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:33.295 [2024-07-13 23:24:22.524952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174744 ] 00:38:33.295 { 00:38:33.295 "subsystems": [ 00:38:33.295 { 00:38:33.295 "subsystem": "bdev", 00:38:33.295 "config": [ 00:38:33.295 { 00:38:33.295 "params": { 00:38:33.295 "trtype": "pcie", 00:38:33.295 "traddr": "0000:00:10.0", 00:38:33.295 "name": "Nvme0" 00:38:33.295 }, 00:38:33.295 "method": "bdev_nvme_attach_controller" 00:38:33.295 }, 00:38:33.295 { 00:38:33.295 "method": "bdev_wait_for_examine" 00:38:33.295 } 00:38:33.295 ] 00:38:33.295 } 00:38:33.295 ] 00:38:33.295 } 00:38:33.295 [2024-07-13 23:24:22.661715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.554 [2024-07-13 23:24:22.734934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.812  Copying: 56/56 [kB] (average 54 MBps) 00:38:33.812 00:38:33.812 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:33.812 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:38:33.812 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:33.812 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:34.071 [2024-07-13 23:24:23.246867] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:34.071 [2024-07-13 23:24:23.247114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174753 ] 00:38:34.072 { 00:38:34.072 "subsystems": [ 00:38:34.072 { 00:38:34.072 "subsystem": "bdev", 00:38:34.072 "config": [ 00:38:34.072 { 00:38:34.072 "params": { 00:38:34.072 "trtype": "pcie", 00:38:34.072 "traddr": "0000:00:10.0", 00:38:34.072 "name": "Nvme0" 00:38:34.072 }, 00:38:34.072 "method": "bdev_nvme_attach_controller" 00:38:34.072 }, 00:38:34.072 { 00:38:34.072 "method": "bdev_wait_for_examine" 00:38:34.072 } 00:38:34.072 ] 00:38:34.072 } 00:38:34.072 ] 00:38:34.072 } 00:38:34.072 [2024-07-13 23:24:23.391825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.329 [2024-07-13 23:24:23.476671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.585  Copying: 56/56 [kB] (average 54 MBps) 00:38:34.585 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:34.585 23:24:23 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:34.585 23:24:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:34.842 [2024-07-13 23:24:24.001473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:34.842 [2024-07-13 23:24:24.001713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174773 ] 00:38:34.842 { 00:38:34.842 "subsystems": [ 00:38:34.842 { 00:38:34.842 "subsystem": "bdev", 00:38:34.842 "config": [ 00:38:34.842 { 00:38:34.842 "params": { 00:38:34.842 "trtype": "pcie", 00:38:34.842 "traddr": "0000:00:10.0", 00:38:34.842 "name": "Nvme0" 00:38:34.842 }, 00:38:34.842 "method": "bdev_nvme_attach_controller" 00:38:34.842 }, 00:38:34.842 { 00:38:34.842 "method": "bdev_wait_for_examine" 00:38:34.842 } 00:38:34.842 ] 00:38:34.842 } 00:38:34.842 ] 00:38:34.842 } 00:38:34.842 [2024-07-13 23:24:24.139879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.842 [2024-07-13 23:24:24.204662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.357  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:35.357 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:35.357 23:24:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:35.922 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:38:35.922 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:35.922 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:35.922 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:35.922 [2024-07-13 23:24:25.174748] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:35.922 [2024-07-13 23:24:25.175044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174789 ] 00:38:35.922 { 00:38:35.922 "subsystems": [ 00:38:35.922 { 00:38:35.922 "subsystem": "bdev", 00:38:35.922 "config": [ 00:38:35.922 { 00:38:35.922 "params": { 00:38:35.922 "trtype": "pcie", 00:38:35.922 "traddr": "0000:00:10.0", 00:38:35.922 "name": "Nvme0" 00:38:35.922 }, 00:38:35.922 "method": "bdev_nvme_attach_controller" 00:38:35.922 }, 00:38:35.922 { 00:38:35.922 "method": "bdev_wait_for_examine" 00:38:35.922 } 00:38:35.922 ] 00:38:35.922 } 00:38:35.922 ] 00:38:35.922 } 00:38:35.922 [2024-07-13 23:24:25.320728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.180 [2024-07-13 23:24:25.394966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.438  Copying: 48/48 [kB] (average 46 MBps) 00:38:36.438 00:38:36.438 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:38:36.438 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:36.438 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:36.438 23:24:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:36.697 { 00:38:36.697 "subsystems": [ 00:38:36.697 { 00:38:36.697 "subsystem": "bdev", 00:38:36.697 "config": [ 00:38:36.697 { 00:38:36.697 "params": { 00:38:36.697 "trtype": "pcie", 00:38:36.697 "traddr": "0000:00:10.0", 00:38:36.697 "name": "Nvme0" 00:38:36.697 }, 00:38:36.697 "method": "bdev_nvme_attach_controller" 00:38:36.697 }, 00:38:36.697 { 00:38:36.697 "method": "bdev_wait_for_examine" 00:38:36.697 } 00:38:36.697 ] 00:38:36.697 } 00:38:36.697 ] 00:38:36.697 } 00:38:36.697 [2024-07-13 23:24:25.895736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
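The { "subsystems": ... } block interleaved before every run is the configuration gen_conf feeds spdk_dd over a file descriptor; reassembled from the trace fragments, it attaches the QEMU controller at 0000:00:10.0 as Nvme0 (exposing bdev Nvme0n1) and then waits for bdev examine to finish:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }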
00:38:36.697 [2024-07-13 23:24:25.895998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174809 ] 00:38:36.697 [2024-07-13 23:24:26.042524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.697 [2024-07-13 23:24:26.101205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.215  Copying: 48/48 [kB] (average 23 MBps) 00:38:37.215 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:37.215 23:24:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:37.215 [2024-07-13 23:24:26.608307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:37.215 [2024-07-13 23:24:26.609164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174825 ] 00:38:37.215 { 00:38:37.215 "subsystems": [ 00:38:37.215 { 00:38:37.215 "subsystem": "bdev", 00:38:37.215 "config": [ 00:38:37.215 { 00:38:37.215 "params": { 00:38:37.215 "trtype": "pcie", 00:38:37.215 "traddr": "0000:00:10.0", 00:38:37.215 "name": "Nvme0" 00:38:37.215 }, 00:38:37.215 "method": "bdev_nvme_attach_controller" 00:38:37.215 }, 00:38:37.215 { 00:38:37.215 "method": "bdev_wait_for_examine" 00:38:37.215 } 00:38:37.215 ] 00:38:37.215 } 00:38:37.215 ] 00:38:37.215 } 00:38:37.474 [2024-07-13 23:24:26.751886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.474 [2024-07-13 23:24:26.840501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.992  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:37.992 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:37.992 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:38.556 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:38:38.556 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:38.556 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:38.556 23:24:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:38.556 [2024-07-13 23:24:27.750947] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:38.556 [2024-07-13 23:24:27.751573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174845 ] 00:38:38.556 { 00:38:38.556 "subsystems": [ 00:38:38.556 { 00:38:38.556 "subsystem": "bdev", 00:38:38.556 "config": [ 00:38:38.556 { 00:38:38.556 "params": { 00:38:38.556 "trtype": "pcie", 00:38:38.556 "traddr": "0000:00:10.0", 00:38:38.556 "name": "Nvme0" 00:38:38.556 }, 00:38:38.556 "method": "bdev_nvme_attach_controller" 00:38:38.556 }, 00:38:38.556 { 00:38:38.556 "method": "bdev_wait_for_examine" 00:38:38.556 } 00:38:38.556 ] 00:38:38.556 } 00:38:38.556 ] 00:38:38.556 } 00:38:38.556 [2024-07-13 23:24:27.888164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.815 [2024-07-13 23:24:27.964258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.072  Copying: 48/48 [kB] (average 46 MBps) 00:38:39.072 00:38:39.072 23:24:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:38:39.072 23:24:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:39.072 23:24:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:39.072 23:24:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:39.072 { 00:38:39.072 "subsystems": [ 00:38:39.072 { 00:38:39.072 "subsystem": "bdev", 00:38:39.072 "config": [ 00:38:39.072 { 00:38:39.072 "params": { 00:38:39.072 "trtype": "pcie", 00:38:39.072 "traddr": "0000:00:10.0", 00:38:39.072 "name": "Nvme0" 00:38:39.072 }, 00:38:39.072 "method": "bdev_nvme_attach_controller" 00:38:39.072 }, 00:38:39.072 { 00:38:39.072 "method": "bdev_wait_for_examine" 00:38:39.072 } 00:38:39.072 ] 00:38:39.072 } 00:38:39.072 ] 00:38:39.072 } 00:38:39.072 [2024-07-13 23:24:28.445465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:39.072 [2024-07-13 23:24:28.445730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174857 ] 00:38:39.330 [2024-07-13 23:24:28.588342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.330 [2024-07-13 23:24:28.649148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.847  Copying: 48/48 [kB] (average 46 MBps) 00:38:39.847 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:39.847 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:39.847 [2024-07-13 23:24:29.137365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:39.847 [2024-07-13 23:24:29.137642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174874 ] 00:38:39.847 { 00:38:39.847 "subsystems": [ 00:38:39.847 { 00:38:39.847 "subsystem": "bdev", 00:38:39.847 "config": [ 00:38:39.847 { 00:38:39.847 "params": { 00:38:39.847 "trtype": "pcie", 00:38:39.847 "traddr": "0000:00:10.0", 00:38:39.847 "name": "Nvme0" 00:38:39.847 }, 00:38:39.847 "method": "bdev_nvme_attach_controller" 00:38:39.847 }, 00:38:39.847 { 00:38:39.847 "method": "bdev_wait_for_examine" 00:38:39.847 } 00:38:39.847 ] 00:38:39.847 } 00:38:39.847 ] 00:38:39.847 } 00:38:40.106 [2024-07-13 23:24:29.282435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.106 [2024-07-13 23:24:29.374960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.625  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:40.625 00:38:40.625 00:38:40.625 real 0m16.037s 00:38:40.625 user 0m10.884s 00:38:40.625 sys 0m3.742s 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:40.625 ************************************ 00:38:40.625 END TEST dd_rw 00:38:40.625 ************************************ 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:40.625 ************************************ 00:38:40.625 START TEST dd_rw_offset 00:38:40.625 ************************************ 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=s1ct7dd9tetnd32gof6u102usrt0x1o27nkiys4fw9xj30oqa7x6ndvksy7qaosfboammdgoxg9ib0qjau4hvy3ansrpzvzdv1tqeqp8aax0fnis0mpo7jwse3ywmzr2asdw84fts5i3hq9clii2evzuswldjog9yemi1o1jm8hsft2xtfj9gubii7sotrfivwnjptlf90sgvolqwr88lo36rfs7txt61ugwtddca15hvlxxsq750pxp5ugk8rl84602tj04qm7m84zsvppt1k0wuhz9y014f451vp1kd4h432tfrl9nwcubs2dxxhjcts7hbgh0xyy6knbn8pezeedmew1macxkxinqd9xgze2ljvd9r363wej2g590apcfmpcdo8ueavcmbzt6ktwfu0ibl63fo6jl20dvegj5pilzziddimtn9tbr9u1lrg8ofc13ol0rwf6h8n9tsyr7qcbtz3yxp0gbsh8poueyghvbmf2yajv9aplzf9dyo3wzolhkk92oco67vdiop7tdx07xpd9kv8n3fbj1b0f9ee9mn7rpxm4agdufp8zegqz7g0o2svro3miu9em7xu919p40o681pxwgo64md9k0ctoo2rs12bvr61i72kd5no8qsefnxu59unnmkjcb1pqo7jkj4fnparqu9sova3bb0uvccvvhs0ge7y2foxvrnv7opxixyd242u7lh5j9nxkn3ecp8sk2ny0ncnkxjn17ugemdcmx9qo5ab1fazlcxr871z7b4742ibxg7vdolwa1giunpecg1n91znlp1has2jhpyea9g44zsofi13rwyjcqtewkrlzw3ijfdlpezh2oca23ayij4qnd8mi93y8eev1idljuua0t93r7dq5c87ln18q2tc91nza1z79oi4e38r1ob0rzbii2ui6d4f5kuune1ztq21n6ool9lujbwn36ejdvikuq1fet2zccrdsawlhhipykmwh5uw14lrsaycbm5zotyuegd1cu92jpsiu3ke3ra8jm1rhfmgafchpvolottieylrtypqz1g7imctza4t738n1nzm52osnkca0dau8n0xv1113drszznw42ezy9g3krm2df4974ft2ma03rn2hculvdhkka3jkixbxypqjmz5x0d29uervshtig1pm9bkxd36vwc0tv708ng9xhf7lbdpul5jcv1xuqykdnoi387klblyintl4osa0fzskstyuav09s7to6m8oet37lohirks222179q687wnzeiuhe0ag7ub2jgt8y09ee33egjgfl8lq46w6y3bhyteex893rcfa2f8sa6ilfbz3ul91v1o60k4r45xmexntbpy6ojj2sy83ub6en82gdmwcue0lz3qqtwnfm359573nuc967tea2em8vtugbxmdszglucj0mxkqwh0c0pbxd9ooctqqkd0n0s0z5yn61bv6xkopptusiaujbx6yom9omkmkz9auza9ky83pcbtqie040a91d99q3rpbg4wdp8emntcauc4k85uce2eel94wc0d6k0n646nn0iyef4glfhdabgfynu6cb4ivn9j83m54mjmn1zpiy5dlhtzzj9r93kjm6po355do2licrzsdx6ji0fldshhozlqsbq9uiucyrvtzyn2k8mvzf3gw2edqfj4hh5zixcryot1ric0yed0n2aug9fvcgbd98uqznlt9uw716bnnhhs24gfzvh4kxe296rwzee1ff21ke5o8a4ml18ee784wr00ibt7eoy6t1hupq90oz1n4m0xiyuwocxvy9hm7z4jh6c6c7ms2ivr16239zbl0dsoo41bt8r8iiddh5qqgy7jqisa4krvc1hdhqqovczj6h8jz8dos6i8lm3upahpmey84knpsdfl024ogrq214ahpewrdgrf83fz5ome7mbiet99h0uokn7st21dn7wmdajkly0ftdqlkhob0l79pngl7zkeq1ranybjp61iloo35yb35iafhwxnjuyi63gb204rv9dtj6yymvqggigiw9rx2w325aywz57atr6exnavqrt6kv0zufn4qsl76jv7xjdrbvj8pf74vbw9azt3ek5tlk3fks322if5kgv539phjnc5j7q4dyqmwqul5w8oreqaktury8w2bn1w74z8w6gbjqwgaa4mbcppsmytj10uf42djvxj5nqkt9awvix3dqdw1cdwj6w3x6sop6j33cvwu0rapy5rbu4064jn72tvqltaveumapi1d9sipl6njk8hz7ul3h7dzsm9cc5mmafn0v4xdv66pophvpjpa9z0rriftltrxq5pq9lu91yizga0c7zg4qti7j50pi0co4pqkqz2hgdqvh19343v10usi5xa9zuxjr9psrglykoksu10hg2a5ypw1lv5tts3cqi7jgsnqdowotfdb15xy91tvbhpg660wr9gfutc1ehfhgjrn2mjqx768wqwk9d3vs9gud2yhk7l5uai3v0n272q1rm7l15b2euowoys6xxqfd0uuo5r2i509ai7nbsik7cymtnccenpx7oefe9rt2e9e6uyn4ksgm8qvgyinl0f0yif3ntg4n1fkgjlo7toxwn6ii9z8s5kjhwq5l1c2f8mxqy8km8ir255ghalmcl6o2so7gb4p03yhl7uptlfyufge2ezk18hwrfbxo8k4lsxmf9hx5kqs6iui2669n5iop3xse412wub04hr97a2hnzosvxdz6mw4chw50c2wk8z0loc1n0lz1gtkvii426tcn302li9xvhuf2yzxwwp77rj9s3zxueicnc90dlx7yvgqcm51tpd7e1k9hjpylwnghsajgjiytoi6g19th0ctk06cd8pqatwiyo7wzae3vr1khlxqx2xogscq2vquabzj3ne2yfnw7x2wmns61e1m8fgz9wp57kgvhke29zvlxcb5yg1g976i5aokb0fubaiz7fsy77bau313r6j8kee7wh64tahdz2wisaz88pbsyv6y55bmh4py38au9xov13nuwh98m2xe7c9qj4995xxvd885759lqt45hxc8qp1rkkpsj9ktfzdajri6bs4b5z45pl8tb8akdh8609bqy5lc4kq28jq3p4jj00r3n37ig568gisq4iq13agnrvx5ms2fc15a8vccacnt4a5kwrntp8ox5egmiy76uycfacievlt4pk7o4n294n5s22hgv8vhrr852xznqzgkygr94evqmh13u3x7hnvjnroy9ndly1ct2td6fped01poaq09e69xjcix74cl6bgvd9a7fl4202jdyaoeozdba00j28nyblfv3jnhzs6sxq36fdu8mo3dacj9iwpw4zzkbjf7i7bsexiaxj9gukf2yznp4gzvumuuvrj87u8biuskkp1finjl7xw6pxp7zl6n1gfehi7fg1jsss6em1v8youw82trwtyjrajkfij9mhgk0bhwu0cciems3e29o67zg26v26fslzb0vuic0xlm2gm6ofa312f
vmfu0v5qy5elh0spa5ya9bf90wrod4gokopvrl6tvh9e547nvk6yl5wxhk4fs8tc8j8eg1g77f924pd1oc24z9h2087r1in1d84isi5tp4pgueul7pepazc6nwuogb688dzb33mmevkq76i62n8umx21f70fvfvlp4541lfz36p4wa3dyseilcxf1lc4xp3ccb704p6t035uy8skx2jnnox961vcy3k97wxtzdmrqisyd2occ6fp960ik5jhiy0jtp1z46st6qx5t9df128z4g80jlxwzrjc72h85kjen2qxy5nazl1mmdilarlg80qu38ewxje2sm5zwrqvv0iedwvd2tcaeqhk0su36xidv09vou4cd51hkj04u6m1auro96p0i1ul9ruet8nihehj8t6cp0w1hf3z9welsc9siskc42l0pyfpqvbq5a0uzz6yz4k51x5jxhw2a3zsk0d80wyjzo25b7fi9a0j5mlqkg1o60wegrj3zw37iece6389raettig8ifjcc8gui1ofr1z5q5yuf42dwd 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:38:40.625 23:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:40.625 [2024-07-13 23:24:30.003252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:40.626 [2024-07-13 23:24:30.003544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174915 ] 00:38:40.626 { 00:38:40.626 "subsystems": [ 00:38:40.626 { 00:38:40.626 "subsystem": "bdev", 00:38:40.626 "config": [ 00:38:40.626 { 00:38:40.626 "params": { 00:38:40.626 "trtype": "pcie", 00:38:40.626 "traddr": "0000:00:10.0", 00:38:40.626 "name": "Nvme0" 00:38:40.626 }, 00:38:40.626 "method": "bdev_nvme_attach_controller" 00:38:40.626 }, 00:38:40.626 { 00:38:40.626 "method": "bdev_wait_for_examine" 00:38:40.626 } 00:38:40.626 ] 00:38:40.626 } 00:38:40.626 ] 00:38:40.626 } 00:38:40.884 [2024-07-13 23:24:30.151760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.884 [2024-07-13 23:24:30.212065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.400  Copying: 4096/4096 [B] (average 4000 kBps) 00:38:41.400 00:38:41.400 23:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:38:41.400 23:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:38:41.400 23:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:38:41.400 23:24:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:41.400 { 00:38:41.400 "subsystems": [ 00:38:41.400 { 00:38:41.400 "subsystem": "bdev", 00:38:41.400 "config": [ 00:38:41.400 { 00:38:41.400 "params": { 00:38:41.400 "trtype": "pcie", 00:38:41.400 "traddr": "0000:00:10.0", 00:38:41.400 "name": "Nvme0" 00:38:41.400 }, 00:38:41.400 "method": "bdev_nvme_attach_controller" 00:38:41.400 }, 00:38:41.400 { 00:38:41.400 "method": "bdev_wait_for_examine" 00:38:41.400 } 00:38:41.400 ] 00:38:41.400 } 00:38:41.400 ] 00:38:41.400 } 00:38:41.400 [2024-07-13 23:24:30.714554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
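dd_rw_offset stages 4096 random bytes (the string dumped above) in dd.dump0, writes them one block into the bdev with --seek=1, reads the same block back with --skip=1, and compares in-shell rather than with diff. A condensed sketch, with the staging step assumed from the surrounding helpers:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
printf %s "$data" > $D/dd.dump0                                           # $data from gen_bytes 4096
"$DD" --if=$D/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62            # write at block offset 1
"$DD" --ib=Nvme0n1 --of=$D/dd.dump1 --skip=1 --count=1 --json /dev/fd/62  # read from the same offset
read -rn4096 data_check < $D/dd.dump1
[[ $data == "$data_check" ]]                                              # the escaped match below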
00:38:41.400 [2024-07-13 23:24:30.715032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174935 ] 00:38:41.659 [2024-07-13 23:24:30.860346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.659 [2024-07-13 23:24:30.927747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.176  Copying: 4096/4096 [B] (average 4000 kBps) 00:38:42.176 00:38:42.176 23:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ s1ct7dd9tetnd32gof6u102usrt0x1o27nkiys4fw9xj30oqa7x6ndvksy7qaosfboammdgoxg9ib0qjau4hvy3ansrpzvzdv1tqeqp8aax0fnis0mpo7jwse3ywmzr2asdw84fts5i3hq9clii2evzuswldjog9yemi1o1jm8hsft2xtfj9gubii7sotrfivwnjptlf90sgvolqwr88lo36rfs7txt61ugwtddca15hvlxxsq750pxp5ugk8rl84602tj04qm7m84zsvppt1k0wuhz9y014f451vp1kd4h432tfrl9nwcubs2dxxhjcts7hbgh0xyy6knbn8pezeedmew1macxkxinqd9xgze2ljvd9r363wej2g590apcfmpcdo8ueavcmbzt6ktwfu0ibl63fo6jl20dvegj5pilzziddimtn9tbr9u1lrg8ofc13ol0rwf6h8n9tsyr7qcbtz3yxp0gbsh8poueyghvbmf2yajv9aplzf9dyo3wzolhkk92oco67vdiop7tdx07xpd9kv8n3fbj1b0f9ee9mn7rpxm4agdufp8zegqz7g0o2svro3miu9em7xu919p40o681pxwgo64md9k0ctoo2rs12bvr61i72kd5no8qsefnxu59unnmkjcb1pqo7jkj4fnparqu9sova3bb0uvccvvhs0ge7y2foxvrnv7opxixyd242u7lh5j9nxkn3ecp8sk2ny0ncnkxjn17ugemdcmx9qo5ab1fazlcxr871z7b4742ibxg7vdolwa1giunpecg1n91znlp1has2jhpyea9g44zsofi13rwyjcqtewkrlzw3ijfdlpezh2oca23ayij4qnd8mi93y8eev1idljuua0t93r7dq5c87ln18q2tc91nza1z79oi4e38r1ob0rzbii2ui6d4f5kuune1ztq21n6ool9lujbwn36ejdvikuq1fet2zccrdsawlhhipykmwh5uw14lrsaycbm5zotyuegd1cu92jpsiu3ke3ra8jm1rhfmgafchpvolottieylrtypqz1g7imctza4t738n1nzm52osnkca0dau8n0xv1113drszznw42ezy9g3krm2df4974ft2ma03rn2hculvdhkka3jkixbxypqjmz5x0d29uervshtig1pm9bkxd36vwc0tv708ng9xhf7lbdpul5jcv1xuqykdnoi387klblyintl4osa0fzskstyuav09s7to6m8oet37lohirks222179q687wnzeiuhe0ag7ub2jgt8y09ee33egjgfl8lq46w6y3bhyteex893rcfa2f8sa6ilfbz3ul91v1o60k4r45xmexntbpy6ojj2sy83ub6en82gdmwcue0lz3qqtwnfm359573nuc967tea2em8vtugbxmdszglucj0mxkqwh0c0pbxd9ooctqqkd0n0s0z5yn61bv6xkopptusiaujbx6yom9omkmkz9auza9ky83pcbtqie040a91d99q3rpbg4wdp8emntcauc4k85uce2eel94wc0d6k0n646nn0iyef4glfhdabgfynu6cb4ivn9j83m54mjmn1zpiy5dlhtzzj9r93kjm6po355do2licrzsdx6ji0fldshhozlqsbq9uiucyrvtzyn2k8mvzf3gw2edqfj4hh5zixcryot1ric0yed0n2aug9fvcgbd98uqznlt9uw716bnnhhs24gfzvh4kxe296rwzee1ff21ke5o8a4ml18ee784wr00ibt7eoy6t1hupq90oz1n4m0xiyuwocxvy9hm7z4jh6c6c7ms2ivr16239zbl0dsoo41bt8r8iiddh5qqgy7jqisa4krvc1hdhqqovczj6h8jz8dos6i8lm3upahpmey84knpsdfl024ogrq214ahpewrdgrf83fz5ome7mbiet99h0uokn7st21dn7wmdajkly0ftdqlkhob0l79pngl7zkeq1ranybjp61iloo35yb35iafhwxnjuyi63gb204rv9dtj6yymvqggigiw9rx2w325aywz57atr6exnavqrt6kv0zufn4qsl76jv7xjdrbvj8pf74vbw9azt3ek5tlk3fks322if5kgv539phjnc5j7q4dyqmwqul5w8oreqaktury8w2bn1w74z8w6gbjqwgaa4mbcppsmytj10uf42djvxj5nqkt9awvix3dqdw1cdwj6w3x6sop6j33cvwu0rapy5rbu4064jn72tvqltaveumapi1d9sipl6njk8hz7ul3h7dzsm9cc5mmafn0v4xdv66pophvpjpa9z0rriftltrxq5pq9lu91yizga0c7zg4qti7j50pi0co4pqkqz2hgdqvh19343v10usi5xa9zuxjr9psrglykoksu10hg2a5ypw1lv5tts3cqi7jgsnqdowotfdb15xy91tvbhpg660wr9gfutc1ehfhgjrn2mjqx768wqwk9d3vs9gud2yhk7l5uai3v0n272q1rm7l15b2euowoys6xxqfd0uuo5r2i509ai7nbsik7cymtnccenpx7oefe9rt2e9e6uyn4ksgm8qvgyinl0f0yif3ntg4n1fkgjlo7toxwn6ii9z8s5kjhwq5l1c2f8mxqy8km8ir255ghalmcl6o2so7gb4p03yhl7uptlfyufge2ezk18hwrfbxo8k4lsxmf9hx5kqs6iui2669n5iop3xse412wub04hr97a2hnzosv
xdz6mw4chw50c2wk8z0loc1n0lz1gtkvii426tcn302li9xvhuf2yzxwwp77rj9s3zxueicnc90dlx7yvgqcm51tpd7e1k9hjpylwnghsajgjiytoi6g19th0ctk06cd8pqatwiyo7wzae3vr1khlxqx2xogscq2vquabzj3ne2yfnw7x2wmns61e1m8fgz9wp57kgvhke29zvlxcb5yg1g976i5aokb0fubaiz7fsy77bau313r6j8kee7wh64tahdz2wisaz88pbsyv6y55bmh4py38au9xov13nuwh98m2xe7c9qj4995xxvd885759lqt45hxc8qp1rkkpsj9ktfzdajri6bs4b5z45pl8tb8akdh8609bqy5lc4kq28jq3p4jj00r3n37ig568gisq4iq13agnrvx5ms2fc15a8vccacnt4a5kwrntp8ox5egmiy76uycfacievlt4pk7o4n294n5s22hgv8vhrr852xznqzgkygr94evqmh13u3x7hnvjnroy9ndly1ct2td6fped01poaq09e69xjcix74cl6bgvd9a7fl4202jdyaoeozdba00j28nyblfv3jnhzs6sxq36fdu8mo3dacj9iwpw4zzkbjf7i7bsexiaxj9gukf2yznp4gzvumuuvrj87u8biuskkp1finjl7xw6pxp7zl6n1gfehi7fg1jsss6em1v8youw82trwtyjrajkfij9mhgk0bhwu0cciems3e29o67zg26v26fslzb0vuic0xlm2gm6ofa312fvmfu0v5qy5elh0spa5ya9bf90wrod4gokopvrl6tvh9e547nvk6yl5wxhk4fs8tc8j8eg1g77f924pd1oc24z9h2087r1in1d84isi5tp4pgueul7pepazc6nwuogb688dzb33mmevkq76i62n8umx21f70fvfvlp4541lfz36p4wa3dyseilcxf1lc4xp3ccb704p6t035uy8skx2jnnox961vcy3k97wxtzdmrqisyd2occ6fp960ik5jhiy0jtp1z46st6qx5t9df128z4g80jlxwzrjc72h85kjen2qxy5nazl1mmdilarlg80qu38ewxje2sm5zwrqvv0iedwvd2tcaeqhk0su36xidv09vou4cd51hkj04u6m1auro96p0i1ul9ruet8nihehj8t6cp0w1hf3z9welsc9siskc42l0pyfpqvbq5a0uzz6yz4k51x5jxhw2a3zsk0d80wyjzo25b7fi9a0j5mlqkg1o60wegrj3zw37iece6389raettig8ifjcc8gui1ofr1z5q5yuf42dwd == \s\1\c\t\7\d\d\9\t\e\t\n\d\3\2\g\o\f\6\u\1\0\2\u\s\r\t\0\x\1\o\2\7\n\k\i\y\s\4\f\w\9\x\j\3\0\o\q\a\7\x\6\n\d\v\k\s\y\7\q\a\o\s\f\b\o\a\m\m\d\g\o\x\g\9\i\b\0\q\j\a\u\4\h\v\y\3\a\n\s\r\p\z\v\z\d\v\1\t\q\e\q\p\8\a\a\x\0\f\n\i\s\0\m\p\o\7\j\w\s\e\3\y\w\m\z\r\2\a\s\d\w\8\4\f\t\s\5\i\3\h\q\9\c\l\i\i\2\e\v\z\u\s\w\l\d\j\o\g\9\y\e\m\i\1\o\1\j\m\8\h\s\f\t\2\x\t\f\j\9\g\u\b\i\i\7\s\o\t\r\f\i\v\w\n\j\p\t\l\f\9\0\s\g\v\o\l\q\w\r\8\8\l\o\3\6\r\f\s\7\t\x\t\6\1\u\g\w\t\d\d\c\a\1\5\h\v\l\x\x\s\q\7\5\0\p\x\p\5\u\g\k\8\r\l\8\4\6\0\2\t\j\0\4\q\m\7\m\8\4\z\s\v\p\p\t\1\k\0\w\u\h\z\9\y\0\1\4\f\4\5\1\v\p\1\k\d\4\h\4\3\2\t\f\r\l\9\n\w\c\u\b\s\2\d\x\x\h\j\c\t\s\7\h\b\g\h\0\x\y\y\6\k\n\b\n\8\p\e\z\e\e\d\m\e\w\1\m\a\c\x\k\x\i\n\q\d\9\x\g\z\e\2\l\j\v\d\9\r\3\6\3\w\e\j\2\g\5\9\0\a\p\c\f\m\p\c\d\o\8\u\e\a\v\c\m\b\z\t\6\k\t\w\f\u\0\i\b\l\6\3\f\o\6\j\l\2\0\d\v\e\g\j\5\p\i\l\z\z\i\d\d\i\m\t\n\9\t\b\r\9\u\1\l\r\g\8\o\f\c\1\3\o\l\0\r\w\f\6\h\8\n\9\t\s\y\r\7\q\c\b\t\z\3\y\x\p\0\g\b\s\h\8\p\o\u\e\y\g\h\v\b\m\f\2\y\a\j\v\9\a\p\l\z\f\9\d\y\o\3\w\z\o\l\h\k\k\9\2\o\c\o\6\7\v\d\i\o\p\7\t\d\x\0\7\x\p\d\9\k\v\8\n\3\f\b\j\1\b\0\f\9\e\e\9\m\n\7\r\p\x\m\4\a\g\d\u\f\p\8\z\e\g\q\z\7\g\0\o\2\s\v\r\o\3\m\i\u\9\e\m\7\x\u\9\1\9\p\4\0\o\6\8\1\p\x\w\g\o\6\4\m\d\9\k\0\c\t\o\o\2\r\s\1\2\b\v\r\6\1\i\7\2\k\d\5\n\o\8\q\s\e\f\n\x\u\5\9\u\n\n\m\k\j\c\b\1\p\q\o\7\j\k\j\4\f\n\p\a\r\q\u\9\s\o\v\a\3\b\b\0\u\v\c\c\v\v\h\s\0\g\e\7\y\2\f\o\x\v\r\n\v\7\o\p\x\i\x\y\d\2\4\2\u\7\l\h\5\j\9\n\x\k\n\3\e\c\p\8\s\k\2\n\y\0\n\c\n\k\x\j\n\1\7\u\g\e\m\d\c\m\x\9\q\o\5\a\b\1\f\a\z\l\c\x\r\8\7\1\z\7\b\4\7\4\2\i\b\x\g\7\v\d\o\l\w\a\1\g\i\u\n\p\e\c\g\1\n\9\1\z\n\l\p\1\h\a\s\2\j\h\p\y\e\a\9\g\4\4\z\s\o\f\i\1\3\r\w\y\j\c\q\t\e\w\k\r\l\z\w\3\i\j\f\d\l\p\e\z\h\2\o\c\a\2\3\a\y\i\j\4\q\n\d\8\m\i\9\3\y\8\e\e\v\1\i\d\l\j\u\u\a\0\t\9\3\r\7\d\q\5\c\8\7\l\n\1\8\q\2\t\c\9\1\n\z\a\1\z\7\9\o\i\4\e\3\8\r\1\o\b\0\r\z\b\i\i\2\u\i\6\d\4\f\5\k\u\u\n\e\1\z\t\q\2\1\n\6\o\o\l\9\l\u\j\b\w\n\3\6\e\j\d\v\i\k\u\q\1\f\e\t\2\z\c\c\r\d\s\a\w\l\h\h\i\p\y\k\m\w\h\5\u\w\1\4\l\r\s\a\y\c\b\m\5\z\o\t\y\u\e\g\d\1\c\u\9\2\j\p\s\i\u\3\k\e\3\r\a\8\j\m\1\r\h\f\m\g\a\f\c\h\p\v\o\l\o\t\t\i\e\y\l\r\t\y\p\q\z\1\g\7\i\m\c\t\z\a\4\t\7\3\8\n\1\n\z\m\5\2\o\s\n\k\c\a\0\d\a\u\8\n\0\x\v\1\1\1\3\d\r\s\z\z\n\w\4\2\e\
z\y\9\g\3\k\r\m\2\d\f\4\9\7\4\f\t\2\m\a\0\3\r\n\2\h\c\u\l\v\d\h\k\k\a\3\j\k\i\x\b\x\y\p\q\j\m\z\5\x\0\d\2\9\u\e\r\v\s\h\t\i\g\1\p\m\9\b\k\x\d\3\6\v\w\c\0\t\v\7\0\8\n\g\9\x\h\f\7\l\b\d\p\u\l\5\j\c\v\1\x\u\q\y\k\d\n\o\i\3\8\7\k\l\b\l\y\i\n\t\l\4\o\s\a\0\f\z\s\k\s\t\y\u\a\v\0\9\s\7\t\o\6\m\8\o\e\t\3\7\l\o\h\i\r\k\s\2\2\2\1\7\9\q\6\8\7\w\n\z\e\i\u\h\e\0\a\g\7\u\b\2\j\g\t\8\y\0\9\e\e\3\3\e\g\j\g\f\l\8\l\q\4\6\w\6\y\3\b\h\y\t\e\e\x\8\9\3\r\c\f\a\2\f\8\s\a\6\i\l\f\b\z\3\u\l\9\1\v\1\o\6\0\k\4\r\4\5\x\m\e\x\n\t\b\p\y\6\o\j\j\2\s\y\8\3\u\b\6\e\n\8\2\g\d\m\w\c\u\e\0\l\z\3\q\q\t\w\n\f\m\3\5\9\5\7\3\n\u\c\9\6\7\t\e\a\2\e\m\8\v\t\u\g\b\x\m\d\s\z\g\l\u\c\j\0\m\x\k\q\w\h\0\c\0\p\b\x\d\9\o\o\c\t\q\q\k\d\0\n\0\s\0\z\5\y\n\6\1\b\v\6\x\k\o\p\p\t\u\s\i\a\u\j\b\x\6\y\o\m\9\o\m\k\m\k\z\9\a\u\z\a\9\k\y\8\3\p\c\b\t\q\i\e\0\4\0\a\9\1\d\9\9\q\3\r\p\b\g\4\w\d\p\8\e\m\n\t\c\a\u\c\4\k\8\5\u\c\e\2\e\e\l\9\4\w\c\0\d\6\k\0\n\6\4\6\n\n\0\i\y\e\f\4\g\l\f\h\d\a\b\g\f\y\n\u\6\c\b\4\i\v\n\9\j\8\3\m\5\4\m\j\m\n\1\z\p\i\y\5\d\l\h\t\z\z\j\9\r\9\3\k\j\m\6\p\o\3\5\5\d\o\2\l\i\c\r\z\s\d\x\6\j\i\0\f\l\d\s\h\h\o\z\l\q\s\b\q\9\u\i\u\c\y\r\v\t\z\y\n\2\k\8\m\v\z\f\3\g\w\2\e\d\q\f\j\4\h\h\5\z\i\x\c\r\y\o\t\1\r\i\c\0\y\e\d\0\n\2\a\u\g\9\f\v\c\g\b\d\9\8\u\q\z\n\l\t\9\u\w\7\1\6\b\n\n\h\h\s\2\4\g\f\z\v\h\4\k\x\e\2\9\6\r\w\z\e\e\1\f\f\2\1\k\e\5\o\8\a\4\m\l\1\8\e\e\7\8\4\w\r\0\0\i\b\t\7\e\o\y\6\t\1\h\u\p\q\9\0\o\z\1\n\4\m\0\x\i\y\u\w\o\c\x\v\y\9\h\m\7\z\4\j\h\6\c\6\c\7\m\s\2\i\v\r\1\6\2\3\9\z\b\l\0\d\s\o\o\4\1\b\t\8\r\8\i\i\d\d\h\5\q\q\g\y\7\j\q\i\s\a\4\k\r\v\c\1\h\d\h\q\q\o\v\c\z\j\6\h\8\j\z\8\d\o\s\6\i\8\l\m\3\u\p\a\h\p\m\e\y\8\4\k\n\p\s\d\f\l\0\2\4\o\g\r\q\2\1\4\a\h\p\e\w\r\d\g\r\f\8\3\f\z\5\o\m\e\7\m\b\i\e\t\9\9\h\0\u\o\k\n\7\s\t\2\1\d\n\7\w\m\d\a\j\k\l\y\0\f\t\d\q\l\k\h\o\b\0\l\7\9\p\n\g\l\7\z\k\e\q\1\r\a\n\y\b\j\p\6\1\i\l\o\o\3\5\y\b\3\5\i\a\f\h\w\x\n\j\u\y\i\6\3\g\b\2\0\4\r\v\9\d\t\j\6\y\y\m\v\q\g\g\i\g\i\w\9\r\x\2\w\3\2\5\a\y\w\z\5\7\a\t\r\6\e\x\n\a\v\q\r\t\6\k\v\0\z\u\f\n\4\q\s\l\7\6\j\v\7\x\j\d\r\b\v\j\8\p\f\7\4\v\b\w\9\a\z\t\3\e\k\5\t\l\k\3\f\k\s\3\2\2\i\f\5\k\g\v\5\3\9\p\h\j\n\c\5\j\7\q\4\d\y\q\m\w\q\u\l\5\w\8\o\r\e\q\a\k\t\u\r\y\8\w\2\b\n\1\w\7\4\z\8\w\6\g\b\j\q\w\g\a\a\4\m\b\c\p\p\s\m\y\t\j\1\0\u\f\4\2\d\j\v\x\j\5\n\q\k\t\9\a\w\v\i\x\3\d\q\d\w\1\c\d\w\j\6\w\3\x\6\s\o\p\6\j\3\3\c\v\w\u\0\r\a\p\y\5\r\b\u\4\0\6\4\j\n\7\2\t\v\q\l\t\a\v\e\u\m\a\p\i\1\d\9\s\i\p\l\6\n\j\k\8\h\z\7\u\l\3\h\7\d\z\s\m\9\c\c\5\m\m\a\f\n\0\v\4\x\d\v\6\6\p\o\p\h\v\p\j\p\a\9\z\0\r\r\i\f\t\l\t\r\x\q\5\p\q\9\l\u\9\1\y\i\z\g\a\0\c\7\z\g\4\q\t\i\7\j\5\0\p\i\0\c\o\4\p\q\k\q\z\2\h\g\d\q\v\h\1\9\3\4\3\v\1\0\u\s\i\5\x\a\9\z\u\x\j\r\9\p\s\r\g\l\y\k\o\k\s\u\1\0\h\g\2\a\5\y\p\w\1\l\v\5\t\t\s\3\c\q\i\7\j\g\s\n\q\d\o\w\o\t\f\d\b\1\5\x\y\9\1\t\v\b\h\p\g\6\6\0\w\r\9\g\f\u\t\c\1\e\h\f\h\g\j\r\n\2\m\j\q\x\7\6\8\w\q\w\k\9\d\3\v\s\9\g\u\d\2\y\h\k\7\l\5\u\a\i\3\v\0\n\2\7\2\q\1\r\m\7\l\1\5\b\2\e\u\o\w\o\y\s\6\x\x\q\f\d\0\u\u\o\5\r\2\i\5\0\9\a\i\7\n\b\s\i\k\7\c\y\m\t\n\c\c\e\n\p\x\7\o\e\f\e\9\r\t\2\e\9\e\6\u\y\n\4\k\s\g\m\8\q\v\g\y\i\n\l\0\f\0\y\i\f\3\n\t\g\4\n\1\f\k\g\j\l\o\7\t\o\x\w\n\6\i\i\9\z\8\s\5\k\j\h\w\q\5\l\1\c\2\f\8\m\x\q\y\8\k\m\8\i\r\2\5\5\g\h\a\l\m\c\l\6\o\2\s\o\7\g\b\4\p\0\3\y\h\l\7\u\p\t\l\f\y\u\f\g\e\2\e\z\k\1\8\h\w\r\f\b\x\o\8\k\4\l\s\x\m\f\9\h\x\5\k\q\s\6\i\u\i\2\6\6\9\n\5\i\o\p\3\x\s\e\4\1\2\w\u\b\0\4\h\r\9\7\a\2\h\n\z\o\s\v\x\d\z\6\m\w\4\c\h\w\5\0\c\2\w\k\8\z\0\l\o\c\1\n\0\l\z\1\g\t\k\v\i\i\4\2\6\t\c\n\3\0\2\l\i\9\x\v\h\u\f\2\y\z\x\w\w\p\7\7\r\j\9\s\3\z\x\u\e\i\c\n\c\9\0\d\l\x\7\y\v\g\q\c\m\5\1\t\p\d\7\e\1\k\9\h\j\p\y\l\w\n\g\h\s\a\j\g\j\i\y\t\o\i\6\g\1\9\t\h\0\c\t
\k\0\6\c\d\8\p\q\a\t\w\i\y\o\7\w\z\a\e\3\v\r\1\k\h\l\x\q\x\2\x\o\g\s\c\q\2\v\q\u\a\b\z\j\3\n\e\2\y\f\n\w\7\x\2\w\m\n\s\6\1\e\1\m\8\f\g\z\9\w\p\5\7\k\g\v\h\k\e\2\9\z\v\l\x\c\b\5\y\g\1\g\9\7\6\i\5\a\o\k\b\0\f\u\b\a\i\z\7\f\s\y\7\7\b\a\u\3\1\3\r\6\j\8\k\e\e\7\w\h\6\4\t\a\h\d\z\2\w\i\s\a\z\8\8\p\b\s\y\v\6\y\5\5\b\m\h\4\p\y\3\8\a\u\9\x\o\v\1\3\n\u\w\h\9\8\m\2\x\e\7\c\9\q\j\4\9\9\5\x\x\v\d\8\8\5\7\5\9\l\q\t\4\5\h\x\c\8\q\p\1\r\k\k\p\s\j\9\k\t\f\z\d\a\j\r\i\6\b\s\4\b\5\z\4\5\p\l\8\t\b\8\a\k\d\h\8\6\0\9\b\q\y\5\l\c\4\k\q\2\8\j\q\3\p\4\j\j\0\0\r\3\n\3\7\i\g\5\6\8\g\i\s\q\4\i\q\1\3\a\g\n\r\v\x\5\m\s\2\f\c\1\5\a\8\v\c\c\a\c\n\t\4\a\5\k\w\r\n\t\p\8\o\x\5\e\g\m\i\y\7\6\u\y\c\f\a\c\i\e\v\l\t\4\p\k\7\o\4\n\2\9\4\n\5\s\2\2\h\g\v\8\v\h\r\r\8\5\2\x\z\n\q\z\g\k\y\g\r\9\4\e\v\q\m\h\1\3\u\3\x\7\h\n\v\j\n\r\o\y\9\n\d\l\y\1\c\t\2\t\d\6\f\p\e\d\0\1\p\o\a\q\0\9\e\6\9\x\j\c\i\x\7\4\c\l\6\b\g\v\d\9\a\7\f\l\4\2\0\2\j\d\y\a\o\e\o\z\d\b\a\0\0\j\2\8\n\y\b\l\f\v\3\j\n\h\z\s\6\s\x\q\3\6\f\d\u\8\m\o\3\d\a\c\j\9\i\w\p\w\4\z\z\k\b\j\f\7\i\7\b\s\e\x\i\a\x\j\9\g\u\k\f\2\y\z\n\p\4\g\z\v\u\m\u\u\v\r\j\8\7\u\8\b\i\u\s\k\k\p\1\f\i\n\j\l\7\x\w\6\p\x\p\7\z\l\6\n\1\g\f\e\h\i\7\f\g\1\j\s\s\s\6\e\m\1\v\8\y\o\u\w\8\2\t\r\w\t\y\j\r\a\j\k\f\i\j\9\m\h\g\k\0\b\h\w\u\0\c\c\i\e\m\s\3\e\2\9\o\6\7\z\g\2\6\v\2\6\f\s\l\z\b\0\v\u\i\c\0\x\l\m\2\g\m\6\o\f\a\3\1\2\f\v\m\f\u\0\v\5\q\y\5\e\l\h\0\s\p\a\5\y\a\9\b\f\9\0\w\r\o\d\4\g\o\k\o\p\v\r\l\6\t\v\h\9\e\5\4\7\n\v\k\6\y\l\5\w\x\h\k\4\f\s\8\t\c\8\j\8\e\g\1\g\7\7\f\9\2\4\p\d\1\o\c\2\4\z\9\h\2\0\8\7\r\1\i\n\1\d\8\4\i\s\i\5\t\p\4\p\g\u\e\u\l\7\p\e\p\a\z\c\6\n\w\u\o\g\b\6\8\8\d\z\b\3\3\m\m\e\v\k\q\7\6\i\6\2\n\8\u\m\x\2\1\f\7\0\f\v\f\v\l\p\4\5\4\1\l\f\z\3\6\p\4\w\a\3\d\y\s\e\i\l\c\x\f\1\l\c\4\x\p\3\c\c\b\7\0\4\p\6\t\0\3\5\u\y\8\s\k\x\2\j\n\n\o\x\9\6\1\v\c\y\3\k\9\7\w\x\t\z\d\m\r\q\i\s\y\d\2\o\c\c\6\f\p\9\6\0\i\k\5\j\h\i\y\0\j\t\p\1\z\4\6\s\t\6\q\x\5\t\9\d\f\1\2\8\z\4\g\8\0\j\l\x\w\z\r\j\c\7\2\h\8\5\k\j\e\n\2\q\x\y\5\n\a\z\l\1\m\m\d\i\l\a\r\l\g\8\0\q\u\3\8\e\w\x\j\e\2\s\m\5\z\w\r\q\v\v\0\i\e\d\w\v\d\2\t\c\a\e\q\h\k\0\s\u\3\6\x\i\d\v\0\9\v\o\u\4\c\d\5\1\h\k\j\0\4\u\6\m\1\a\u\r\o\9\6\p\0\i\1\u\l\9\r\u\e\t\8\n\i\h\e\h\j\8\t\6\c\p\0\w\1\h\f\3\z\9\w\e\l\s\c\9\s\i\s\k\c\4\2\l\0\p\y\f\p\q\v\b\q\5\a\0\u\z\z\6\y\z\4\k\5\1\x\5\j\x\h\w\2\a\3\z\s\k\0\d\8\0\w\y\j\z\o\2\5\b\7\f\i\9\a\0\j\5\m\l\q\k\g\1\o\6\0\w\e\g\r\j\3\z\w\3\7\i\e\c\e\6\3\8\9\r\a\e\t\t\i\g\8\i\f\j\c\c\8\g\u\i\1\o\f\r\1\z\5\q\5\y\u\f\4\2\d\w\d ]] 00:38:42.177 00:38:42.177 real 0m1.480s 00:38:42.177 user 0m0.923s 00:38:42.177 sys 0m0.392s 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:42.177 ************************************ 00:38:42.177 END TEST dd_rw_offset 00:38:42.177 ************************************ 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 
-- # local count=1 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:42.177 23:24:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:42.177 [2024-07-13 23:24:31.471242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:42.177 [2024-07-13 23:24:31.471501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174963 ] 00:38:42.177 { 00:38:42.177 "subsystems": [ 00:38:42.177 { 00:38:42.177 "subsystem": "bdev", 00:38:42.177 "config": [ 00:38:42.177 { 00:38:42.177 "params": { 00:38:42.177 "trtype": "pcie", 00:38:42.177 "traddr": "0000:00:10.0", 00:38:42.177 "name": "Nvme0" 00:38:42.177 }, 00:38:42.177 "method": "bdev_nvme_attach_controller" 00:38:42.177 }, 00:38:42.177 { 00:38:42.177 "method": "bdev_wait_for_examine" 00:38:42.177 } 00:38:42.177 ] 00:38:42.177 } 00:38:42.177 ] 00:38:42.177 } 00:38:42.435 [2024-07-13 23:24:31.617952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.435 [2024-07-13 23:24:31.678127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.002  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:43.002 00:38:43.002 23:24:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:43.002 ************************************ 00:38:43.002 END TEST spdk_dd_basic_rw 00:38:43.002 ************************************ 00:38:43.002 00:38:43.002 real 0m19.435s 00:38:43.002 user 0m12.940s 00:38:43.002 sys 0m4.708s 00:38:43.002 23:24:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:43.002 23:24:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:43.002 23:24:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:38:43.002 23:24:32 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:43.002 23:24:32 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:43.002 23:24:32 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:43.002 23:24:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:43.002 ************************************ 00:38:43.002 START TEST spdk_dd_posix 00:38:43.002 ************************************ 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:43.002 * Looking for test storage... 
00:38:43.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:38:43.002 * First test run, using AIO 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:43.002 ************************************ 00:38:43.002 START TEST dd_flag_append 00:38:43.002 ************************************ 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=j51jz5niopaq6ozkl9wqulnt1douanwn 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=po4blt2f03pqwn0ol5t1px03lblkojxm 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s j51jz5niopaq6ozkl9wqulnt1douanwn 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s po4blt2f03pqwn0ol5t1px03lblkojxm 00:38:43.002 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:43.002 [2024-07-13 23:24:32.353718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
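dd_flag_append seeds the two dump files with distinct 32-byte strings, copies dump0 onto dump1 with --oflag=append, and verifies that dump1 now holds both strings with its original content first. Condensed from the traced commands, with the read-back expression reconstructed:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
printf %s j51jz5niopaq6ozkl9wqulnt1douanwn > $D/dd.dump0
printf %s po4blt2f03pqwn0ol5t1px03lblkojxm > $D/dd.dump1
"$DD" --if=$D/dd.dump0 --of=$D/dd.dump1 --oflag=append
# dump1 must now be its own 32 bytes followed by dump0's 32 bytes
[[ $(< $D/dd.dump1) == po4blt2f03pqwn0ol5t1px03lblkojxmj51jz5niopaq6ozkl9wqulnt1douanwn ]]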
00:38:43.002 [2024-07-13 23:24:32.354771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175029 ] 00:38:43.261 [2024-07-13 23:24:32.498595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.261 [2024-07-13 23:24:32.570086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.827  Copying: 32/32 [B] (average 31 kBps) 00:38:43.827 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ po4blt2f03pqwn0ol5t1px03lblkojxmj51jz5niopaq6ozkl9wqulnt1douanwn == \p\o\4\b\l\t\2\f\0\3\p\q\w\n\0\o\l\5\t\1\p\x\0\3\l\b\l\k\o\j\x\m\j\5\1\j\z\5\n\i\o\p\a\q\6\o\z\k\l\9\w\q\u\l\n\t\1\d\o\u\a\n\w\n ]] 00:38:43.827 00:38:43.827 real 0m0.663s 00:38:43.827 user 0m0.316s 00:38:43.827 sys 0m0.202s 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:43.827 ************************************ 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:43.827 END TEST dd_flag_append 00:38:43.827 ************************************ 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:43.827 23:24:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:43.827 ************************************ 00:38:43.827 START TEST dd_flag_directory 00:38:43.827 ************************************ 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
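The valid_exec_arg/type lines here are the NOT wrapper checking its argument before running it; in this test it wraps an spdk_dd call that uses --iflag=directory on a regular file and must therefore fail with "Not a directory". The es= arithmetic traced further down reduces to inverting the exit status; a condensed reconstruction (the full helper lives in autotest_common.sh):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # strip the signal bias: 236 -> 108, as traced
    (( es != 0 ))                         # pass exactly when the wrapped command failed
}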
00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:43.827 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.827 [2024-07-13 23:24:33.066326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:43.827 [2024-07-13 23:24:33.066757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175070 ] 00:38:43.827 [2024-07-13 23:24:33.211642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.086 [2024-07-13 23:24:33.275231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.086 [2024-07-13 23:24:33.353379] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:44.086 [2024-07-13 23:24:33.353754] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:44.086 [2024-07-13 23:24:33.353850] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:44.086 [2024-07-13 23:24:33.466329] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:44.343 23:24:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:44.343 [2024-07-13 23:24:33.637623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:44.343 [2024-07-13 23:24:33.638104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175075 ] 00:38:44.601 [2024-07-13 23:24:33.784955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.601 [2024-07-13 23:24:33.856886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.601 [2024-07-13 23:24:33.940912] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:44.601 [2024-07-13 23:24:33.941256] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:44.601 [2024-07-13 23:24:33.941354] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:44.860 [2024-07-13 23:24:34.061743] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:44.860 ************************************ 00:38:44.860 END TEST dd_flag_directory 00:38:44.860 ************************************ 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:44.860 00:38:44.860 real 0m1.189s 00:38:44.860 user 0m0.619s 00:38:44.860 sys 0m0.361s 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:44.860 ************************************ 00:38:44.860 START TEST dd_flag_nofollow 00:38:44.860 ************************************ 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.860 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:45.119 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:45.119 [2024-07-13 23:24:34.313424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
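dd_flag_nofollow points symlinks at both dump files and checks that spdk_dd refuses to open them when nofollow is set (the "Too many levels of symbolic links" errors below are the expected ELOOP), while a plain copy through the link still succeeds. Condensed from the traced commands, using NOT as reconstructed earlier:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
ln -fs $D/dd.dump0 $D/dd.dump0.link
ln -fs $D/dd.dump1 $D/dd.dump1.link
NOT "$DD" --if=$D/dd.dump0.link --iflag=nofollow --of=$D/dd.dump1   # must fail: ELOOP on input
NOT "$DD" --if=$D/dd.dump0 --of=$D/dd.dump1.link --oflag=nofollow   # must fail: ELOOP on output
"$DD" --if=$D/dd.dump0.link --of=$D/dd.dump1                        # links are followed by default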
00:38:45.119 [2024-07-13 23:24:34.313686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175118 ] 00:38:45.119 [2024-07-13 23:24:34.460340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.119 [2024-07-13 23:24:34.523917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.377 [2024-07-13 23:24:34.602949] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:45.377 [2024-07-13 23:24:34.603058] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:45.377 [2024-07-13 23:24:34.603100] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:45.377 [2024-07-13 23:24:34.720794] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:45.635 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:38:45.636 23:24:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:45.636 [2024-07-13 23:24:34.894631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:45.636 [2024-07-13 23:24:34.894844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175133 ] 00:38:45.636 [2024-07-13 23:24:35.024561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.894 [2024-07-13 23:24:35.120963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.894 [2024-07-13 23:24:35.204622] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:45.894 [2024-07-13 23:24:35.204725] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:45.894 [2024-07-13 23:24:35.204783] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:46.153 [2024-07-13 23:24:35.330809] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:46.153 23:24:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:46.153 [2024-07-13 23:24:35.527184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
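Both failing runs above hit ELOOP because --iflag/--oflag=nofollow makes spdk_dd open the path with O_NOFOLLOW, which the kernel rejects on a symlink with "Too many levels of symbolic links"; the third run, reading through dd.dump0.link without nofollow, succeeds. The same kernel behavior can be reproduced with GNU coreutils dd, which accepts an analogous iflag (illustration only; the test itself drives spdk_dd):

    printf 'payload' > real_file
    ln -fs real_file file.link
    # nofollow => open(2) with O_NOFOLLOW; a symlink target fails with ELOOP
    dd if=file.link iflag=nofollow of=/dev/null 2>&1 | grep 'symbolic links'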
00:38:46.153 [2024-07-13 23:24:35.527402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175140 ] 00:38:46.412 [2024-07-13 23:24:35.665006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.412 [2024-07-13 23:24:35.739183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.929  Copying: 512/512 [B] (average 500 kBps) 00:38:46.929 00:38:46.929 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ykq4dgno40h1o63x4a7b51j82pmenoxn2orinnju5rbmqvdautuotfsmdv0n9d86p7lsqoz5wi71u3oek5gq0gknvh7wr6f5x3px4rnvfnn0h40yrcvn1ts6upg2a7qqp2hjbu5pbxt2yjf7xlycrrexbag3humc7a6lcxqtgmxalqh13ab4dt31n6qj3pmyov62rdnu87wetbx8wz0f2t7ljdv6hmwlqscr4nedypb9wy8gg8asnzcp0o119wn9f5t2eugyy6909m8awvow1as6axebktdbaar3km8nujxvkswktupr4r0l2ygolah613hhd19b9crcmqilkw2tg58dqj5xdisk8qn0275z2zjc8bx9tgkqnkvly9yzdxq4rrjn71g37oykqo0qe51axhkr5toc4cy1tc0q4nolfnkzd06w9qj75ja4x999omjkrmw4ub7c428qkfwao55z60s64qiwvp9hlobmvuoexqzu5mhmr3kzlmw0wmd3ntfz == \y\k\q\4\d\g\n\o\4\0\h\1\o\6\3\x\4\a\7\b\5\1\j\8\2\p\m\e\n\o\x\n\2\o\r\i\n\n\j\u\5\r\b\m\q\v\d\a\u\t\u\o\t\f\s\m\d\v\0\n\9\d\8\6\p\7\l\s\q\o\z\5\w\i\7\1\u\3\o\e\k\5\g\q\0\g\k\n\v\h\7\w\r\6\f\5\x\3\p\x\4\r\n\v\f\n\n\0\h\4\0\y\r\c\v\n\1\t\s\6\u\p\g\2\a\7\q\q\p\2\h\j\b\u\5\p\b\x\t\2\y\j\f\7\x\l\y\c\r\r\e\x\b\a\g\3\h\u\m\c\7\a\6\l\c\x\q\t\g\m\x\a\l\q\h\1\3\a\b\4\d\t\3\1\n\6\q\j\3\p\m\y\o\v\6\2\r\d\n\u\8\7\w\e\t\b\x\8\w\z\0\f\2\t\7\l\j\d\v\6\h\m\w\l\q\s\c\r\4\n\e\d\y\p\b\9\w\y\8\g\g\8\a\s\n\z\c\p\0\o\1\1\9\w\n\9\f\5\t\2\e\u\g\y\y\6\9\0\9\m\8\a\w\v\o\w\1\a\s\6\a\x\e\b\k\t\d\b\a\a\r\3\k\m\8\n\u\j\x\v\k\s\w\k\t\u\p\r\4\r\0\l\2\y\g\o\l\a\h\6\1\3\h\h\d\1\9\b\9\c\r\c\m\q\i\l\k\w\2\t\g\5\8\d\q\j\5\x\d\i\s\k\8\q\n\0\2\7\5\z\2\z\j\c\8\b\x\9\t\g\k\q\n\k\v\l\y\9\y\z\d\x\q\4\r\r\j\n\7\1\g\3\7\o\y\k\q\o\0\q\e\5\1\a\x\h\k\r\5\t\o\c\4\c\y\1\t\c\0\q\4\n\o\l\f\n\k\z\d\0\6\w\9\q\j\7\5\j\a\4\x\9\9\9\o\m\j\k\r\m\w\4\u\b\7\c\4\2\8\q\k\f\w\a\o\5\5\z\6\0\s\6\4\q\i\w\v\p\9\h\l\o\b\m\v\u\o\e\x\q\z\u\5\m\h\m\r\3\k\z\l\m\w\0\w\m\d\3\n\t\f\z ]] 00:38:46.929 00:38:46.929 real 0m1.848s 00:38:46.929 user 0m0.962s 00:38:46.929 sys 0m0.549s 00:38:46.929 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:46.929 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:46.929 ************************************ 00:38:46.929 END TEST dd_flag_nofollow 00:38:46.929 ************************************ 00:38:46.929 23:24:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:46.930 ************************************ 00:38:46.930 START TEST dd_flag_noatime 00:38:46.930 ************************************ 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime 
-- dd/posix.sh@54 -- # local atime_of 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720913075 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720913076 00:38:46.930 23:24:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:38:47.866 23:24:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:47.866 [2024-07-13 23:24:37.216780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:47.866 [2024-07-13 23:24:37.217054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175192 ] 00:38:48.125 [2024-07-13 23:24:37.352617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.125 [2024-07-13 23:24:37.420724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.383  Copying: 512/512 [B] (average 500 kBps) 00:38:48.383 00:38:48.383 23:24:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:48.383 23:24:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720913075 )) 00:38:48.383 23:24:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:48.383 23:24:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720913076 )) 00:38:48.383 23:24:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:48.642 [2024-07-13 23:24:37.809950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:48.642 [2024-07-13 23:24:37.810175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175207 ] 00:38:48.642 [2024-07-13 23:24:37.951059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.642 [2024-07-13 23:24:38.013782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.207  Copying: 512/512 [B] (average 500 kBps) 00:38:49.207 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720913078 )) 00:38:49.208 00:38:49.208 real 0m2.247s 00:38:49.208 user 0m0.619s 00:38:49.208 sys 0m0.359s 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:49.208 ************************************ 00:38:49.208 END TEST dd_flag_noatime 00:38:49.208 ************************************ 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:49.208 ************************************ 00:38:49.208 START TEST dd_flags_misc 00:38:49.208 ************************************ 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:49.208 23:24:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:49.208 [2024-07-13 23:24:38.508956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
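The noatime test that just finished works by bracketing reads with stat: it records each file's access time with stat --printf=%X, sleeps one second, re-reads the source with --iflag=noatime and asserts the atime did not move (the two (( atime_if == ... )) checks), then does a plain read and asserts the atime did advance (the final (( atime_if < ... )) check). A rough coreutils equivalent of the first half, with the caveats that O_NOATIME requires owning the file (or CAP_FOWNER) and that relatime mounts can suppress atime updates for the control read as well:

    before=$(stat --printf=%X dd.dump0)
    sleep 1
    # O_NOATIME read: the access timestamp should stay put
    dd if=dd.dump0 iflag=noatime of=/dev/null
    after=$(stat --printf=%X dd.dump0)
    (( before == after )) && echo 'atime preserved'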
00:38:49.208 [2024-07-13 23:24:38.509231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175237 ] 00:38:49.465 [2024-07-13 23:24:38.656729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.465 [2024-07-13 23:24:38.733332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.723  Copying: 512/512 [B] (average 500 kBps) 00:38:49.723 00:38:49.723 23:24:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zaelft3uv4kdu1fo4ml092m55i9u13v7p4nulfbt1avbwy4rafqtfiredjnkgxrcdf9pbsfn1kshw5m9dxvuvyi9knzc7wpulvrnmm0lha74pwd2we7xs9l0uezu6zsvljm8ndhcc6qzz7qet44cjfpfvul2p0ta98bdn5bwzo8wxz6k33zv1lk5ffgff55rtji7an3qe4jnwusdzgup02m23uddsa88yium3tbkw3378ghpx7mivbqqhv13x0np65mshhk1g3fftm4apomi24m60f38ynwzl8nbay2ofohe25z0ork9lusqm2ss0c75azoqjfms4e7635t5u10thevpqalxfv68qhiev3mjwnduf4j5pduqhqv97aw5tphicpj249vy21kuo3op03hhvriawp85a5cfxn6gm6sgteqd6x5hsq8o6409gyos9zz218kvpdx3knt7w4p0c27iseslrl4ka0d19a5wywzul0a3mzcj615dzwcjx2uqiry5 == \z\a\e\l\f\t\3\u\v\4\k\d\u\1\f\o\4\m\l\0\9\2\m\5\5\i\9\u\1\3\v\7\p\4\n\u\l\f\b\t\1\a\v\b\w\y\4\r\a\f\q\t\f\i\r\e\d\j\n\k\g\x\r\c\d\f\9\p\b\s\f\n\1\k\s\h\w\5\m\9\d\x\v\u\v\y\i\9\k\n\z\c\7\w\p\u\l\v\r\n\m\m\0\l\h\a\7\4\p\w\d\2\w\e\7\x\s\9\l\0\u\e\z\u\6\z\s\v\l\j\m\8\n\d\h\c\c\6\q\z\z\7\q\e\t\4\4\c\j\f\p\f\v\u\l\2\p\0\t\a\9\8\b\d\n\5\b\w\z\o\8\w\x\z\6\k\3\3\z\v\1\l\k\5\f\f\g\f\f\5\5\r\t\j\i\7\a\n\3\q\e\4\j\n\w\u\s\d\z\g\u\p\0\2\m\2\3\u\d\d\s\a\8\8\y\i\u\m\3\t\b\k\w\3\3\7\8\g\h\p\x\7\m\i\v\b\q\q\h\v\1\3\x\0\n\p\6\5\m\s\h\h\k\1\g\3\f\f\t\m\4\a\p\o\m\i\2\4\m\6\0\f\3\8\y\n\w\z\l\8\n\b\a\y\2\o\f\o\h\e\2\5\z\0\o\r\k\9\l\u\s\q\m\2\s\s\0\c\7\5\a\z\o\q\j\f\m\s\4\e\7\6\3\5\t\5\u\1\0\t\h\e\v\p\q\a\l\x\f\v\6\8\q\h\i\e\v\3\m\j\w\n\d\u\f\4\j\5\p\d\u\q\h\q\v\9\7\a\w\5\t\p\h\i\c\p\j\2\4\9\v\y\2\1\k\u\o\3\o\p\0\3\h\h\v\r\i\a\w\p\8\5\a\5\c\f\x\n\6\g\m\6\s\g\t\e\q\d\6\x\5\h\s\q\8\o\6\4\0\9\g\y\o\s\9\z\z\2\1\8\k\v\p\d\x\3\k\n\t\7\w\4\p\0\c\2\7\i\s\e\s\l\r\l\4\k\a\0\d\1\9\a\5\w\y\w\z\u\l\0\a\3\m\z\c\j\6\1\5\d\z\w\c\j\x\2\u\q\i\r\y\5 ]] 00:38:49.723 23:24:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:49.723 23:24:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:49.980 [2024-07-13 23:24:39.166694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:49.981 [2024-07-13 23:24:39.166918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175252 ] 00:38:49.981 [2024-07-13 23:24:39.308521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.981 [2024-07-13 23:24:39.383907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.496  Copying: 512/512 [B] (average 500 kBps) 00:38:50.496 00:38:50.496 23:24:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zaelft3uv4kdu1fo4ml092m55i9u13v7p4nulfbt1avbwy4rafqtfiredjnkgxrcdf9pbsfn1kshw5m9dxvuvyi9knzc7wpulvrnmm0lha74pwd2we7xs9l0uezu6zsvljm8ndhcc6qzz7qet44cjfpfvul2p0ta98bdn5bwzo8wxz6k33zv1lk5ffgff55rtji7an3qe4jnwusdzgup02m23uddsa88yium3tbkw3378ghpx7mivbqqhv13x0np65mshhk1g3fftm4apomi24m60f38ynwzl8nbay2ofohe25z0ork9lusqm2ss0c75azoqjfms4e7635t5u10thevpqalxfv68qhiev3mjwnduf4j5pduqhqv97aw5tphicpj249vy21kuo3op03hhvriawp85a5cfxn6gm6sgteqd6x5hsq8o6409gyos9zz218kvpdx3knt7w4p0c27iseslrl4ka0d19a5wywzul0a3mzcj615dzwcjx2uqiry5 == \z\a\e\l\f\t\3\u\v\4\k\d\u\1\f\o\4\m\l\0\9\2\m\5\5\i\9\u\1\3\v\7\p\4\n\u\l\f\b\t\1\a\v\b\w\y\4\r\a\f\q\t\f\i\r\e\d\j\n\k\g\x\r\c\d\f\9\p\b\s\f\n\1\k\s\h\w\5\m\9\d\x\v\u\v\y\i\9\k\n\z\c\7\w\p\u\l\v\r\n\m\m\0\l\h\a\7\4\p\w\d\2\w\e\7\x\s\9\l\0\u\e\z\u\6\z\s\v\l\j\m\8\n\d\h\c\c\6\q\z\z\7\q\e\t\4\4\c\j\f\p\f\v\u\l\2\p\0\t\a\9\8\b\d\n\5\b\w\z\o\8\w\x\z\6\k\3\3\z\v\1\l\k\5\f\f\g\f\f\5\5\r\t\j\i\7\a\n\3\q\e\4\j\n\w\u\s\d\z\g\u\p\0\2\m\2\3\u\d\d\s\a\8\8\y\i\u\m\3\t\b\k\w\3\3\7\8\g\h\p\x\7\m\i\v\b\q\q\h\v\1\3\x\0\n\p\6\5\m\s\h\h\k\1\g\3\f\f\t\m\4\a\p\o\m\i\2\4\m\6\0\f\3\8\y\n\w\z\l\8\n\b\a\y\2\o\f\o\h\e\2\5\z\0\o\r\k\9\l\u\s\q\m\2\s\s\0\c\7\5\a\z\o\q\j\f\m\s\4\e\7\6\3\5\t\5\u\1\0\t\h\e\v\p\q\a\l\x\f\v\6\8\q\h\i\e\v\3\m\j\w\n\d\u\f\4\j\5\p\d\u\q\h\q\v\9\7\a\w\5\t\p\h\i\c\p\j\2\4\9\v\y\2\1\k\u\o\3\o\p\0\3\h\h\v\r\i\a\w\p\8\5\a\5\c\f\x\n\6\g\m\6\s\g\t\e\q\d\6\x\5\h\s\q\8\o\6\4\0\9\g\y\o\s\9\z\z\2\1\8\k\v\p\d\x\3\k\n\t\7\w\4\p\0\c\2\7\i\s\e\s\l\r\l\4\k\a\0\d\1\9\a\5\w\y\w\z\u\l\0\a\3\m\z\c\j\6\1\5\d\z\w\c\j\x\2\u\q\i\r\y\5 ]] 00:38:50.496 23:24:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:50.496 23:24:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:50.496 [2024-07-13 23:24:39.803837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:50.496 [2024-07-13 23:24:39.804081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175261 ] 00:38:50.755 [2024-07-13 23:24:39.942247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:50.755 [2024-07-13 23:24:40.031820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.014  Copying: 512/512 [B] (average 250 kBps) 00:38:51.014 00:38:51.014 23:24:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zaelft3uv4kdu1fo4ml092m55i9u13v7p4nulfbt1avbwy4rafqtfiredjnkgxrcdf9pbsfn1kshw5m9dxvuvyi9knzc7wpulvrnmm0lha74pwd2we7xs9l0uezu6zsvljm8ndhcc6qzz7qet44cjfpfvul2p0ta98bdn5bwzo8wxz6k33zv1lk5ffgff55rtji7an3qe4jnwusdzgup02m23uddsa88yium3tbkw3378ghpx7mivbqqhv13x0np65mshhk1g3fftm4apomi24m60f38ynwzl8nbay2ofohe25z0ork9lusqm2ss0c75azoqjfms4e7635t5u10thevpqalxfv68qhiev3mjwnduf4j5pduqhqv97aw5tphicpj249vy21kuo3op03hhvriawp85a5cfxn6gm6sgteqd6x5hsq8o6409gyos9zz218kvpdx3knt7w4p0c27iseslrl4ka0d19a5wywzul0a3mzcj615dzwcjx2uqiry5 == \z\a\e\l\f\t\3\u\v\4\k\d\u\1\f\o\4\m\l\0\9\2\m\5\5\i\9\u\1\3\v\7\p\4\n\u\l\f\b\t\1\a\v\b\w\y\4\r\a\f\q\t\f\i\r\e\d\j\n\k\g\x\r\c\d\f\9\p\b\s\f\n\1\k\s\h\w\5\m\9\d\x\v\u\v\y\i\9\k\n\z\c\7\w\p\u\l\v\r\n\m\m\0\l\h\a\7\4\p\w\d\2\w\e\7\x\s\9\l\0\u\e\z\u\6\z\s\v\l\j\m\8\n\d\h\c\c\6\q\z\z\7\q\e\t\4\4\c\j\f\p\f\v\u\l\2\p\0\t\a\9\8\b\d\n\5\b\w\z\o\8\w\x\z\6\k\3\3\z\v\1\l\k\5\f\f\g\f\f\5\5\r\t\j\i\7\a\n\3\q\e\4\j\n\w\u\s\d\z\g\u\p\0\2\m\2\3\u\d\d\s\a\8\8\y\i\u\m\3\t\b\k\w\3\3\7\8\g\h\p\x\7\m\i\v\b\q\q\h\v\1\3\x\0\n\p\6\5\m\s\h\h\k\1\g\3\f\f\t\m\4\a\p\o\m\i\2\4\m\6\0\f\3\8\y\n\w\z\l\8\n\b\a\y\2\o\f\o\h\e\2\5\z\0\o\r\k\9\l\u\s\q\m\2\s\s\0\c\7\5\a\z\o\q\j\f\m\s\4\e\7\6\3\5\t\5\u\1\0\t\h\e\v\p\q\a\l\x\f\v\6\8\q\h\i\e\v\3\m\j\w\n\d\u\f\4\j\5\p\d\u\q\h\q\v\9\7\a\w\5\t\p\h\i\c\p\j\2\4\9\v\y\2\1\k\u\o\3\o\p\0\3\h\h\v\r\i\a\w\p\8\5\a\5\c\f\x\n\6\g\m\6\s\g\t\e\q\d\6\x\5\h\s\q\8\o\6\4\0\9\g\y\o\s\9\z\z\2\1\8\k\v\p\d\x\3\k\n\t\7\w\4\p\0\c\2\7\i\s\e\s\l\r\l\4\k\a\0\d\1\9\a\5\w\y\w\z\u\l\0\a\3\m\z\c\j\6\1\5\d\z\w\c\j\x\2\u\q\i\r\y\5 ]] 00:38:51.014 23:24:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:51.014 23:24:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:51.273 [2024-07-13 23:24:40.464896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:51.273 [2024-07-13 23:24:40.465220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175274 ] 00:38:51.273 [2024-07-13 23:24:40.612717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.556 [2024-07-13 23:24:40.704713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.815  Copying: 512/512 [B] (average 250 kBps) 00:38:51.815 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zaelft3uv4kdu1fo4ml092m55i9u13v7p4nulfbt1avbwy4rafqtfiredjnkgxrcdf9pbsfn1kshw5m9dxvuvyi9knzc7wpulvrnmm0lha74pwd2we7xs9l0uezu6zsvljm8ndhcc6qzz7qet44cjfpfvul2p0ta98bdn5bwzo8wxz6k33zv1lk5ffgff55rtji7an3qe4jnwusdzgup02m23uddsa88yium3tbkw3378ghpx7mivbqqhv13x0np65mshhk1g3fftm4apomi24m60f38ynwzl8nbay2ofohe25z0ork9lusqm2ss0c75azoqjfms4e7635t5u10thevpqalxfv68qhiev3mjwnduf4j5pduqhqv97aw5tphicpj249vy21kuo3op03hhvriawp85a5cfxn6gm6sgteqd6x5hsq8o6409gyos9zz218kvpdx3knt7w4p0c27iseslrl4ka0d19a5wywzul0a3mzcj615dzwcjx2uqiry5 == \z\a\e\l\f\t\3\u\v\4\k\d\u\1\f\o\4\m\l\0\9\2\m\5\5\i\9\u\1\3\v\7\p\4\n\u\l\f\b\t\1\a\v\b\w\y\4\r\a\f\q\t\f\i\r\e\d\j\n\k\g\x\r\c\d\f\9\p\b\s\f\n\1\k\s\h\w\5\m\9\d\x\v\u\v\y\i\9\k\n\z\c\7\w\p\u\l\v\r\n\m\m\0\l\h\a\7\4\p\w\d\2\w\e\7\x\s\9\l\0\u\e\z\u\6\z\s\v\l\j\m\8\n\d\h\c\c\6\q\z\z\7\q\e\t\4\4\c\j\f\p\f\v\u\l\2\p\0\t\a\9\8\b\d\n\5\b\w\z\o\8\w\x\z\6\k\3\3\z\v\1\l\k\5\f\f\g\f\f\5\5\r\t\j\i\7\a\n\3\q\e\4\j\n\w\u\s\d\z\g\u\p\0\2\m\2\3\u\d\d\s\a\8\8\y\i\u\m\3\t\b\k\w\3\3\7\8\g\h\p\x\7\m\i\v\b\q\q\h\v\1\3\x\0\n\p\6\5\m\s\h\h\k\1\g\3\f\f\t\m\4\a\p\o\m\i\2\4\m\6\0\f\3\8\y\n\w\z\l\8\n\b\a\y\2\o\f\o\h\e\2\5\z\0\o\r\k\9\l\u\s\q\m\2\s\s\0\c\7\5\a\z\o\q\j\f\m\s\4\e\7\6\3\5\t\5\u\1\0\t\h\e\v\p\q\a\l\x\f\v\6\8\q\h\i\e\v\3\m\j\w\n\d\u\f\4\j\5\p\d\u\q\h\q\v\9\7\a\w\5\t\p\h\i\c\p\j\2\4\9\v\y\2\1\k\u\o\3\o\p\0\3\h\h\v\r\i\a\w\p\8\5\a\5\c\f\x\n\6\g\m\6\s\g\t\e\q\d\6\x\5\h\s\q\8\o\6\4\0\9\g\y\o\s\9\z\z\2\1\8\k\v\p\d\x\3\k\n\t\7\w\4\p\0\c\2\7\i\s\e\s\l\r\l\4\k\a\0\d\1\9\a\5\w\y\w\z\u\l\0\a\3\m\z\c\j\6\1\5\d\z\w\c\j\x\2\u\q\i\r\y\5 ]] 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:51.815 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:51.815 [2024-07-13 23:24:41.169733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
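dd_flags_misc is a small flag matrix: for each read flag in (direct, nonblock) it copies the 512-byte random dump0 into dump1 with each write flag in (direct, nonblock, sync, dsync) and asserts the payload survives the round trip, which is why the same long [[ ... == ... ]] string comparison follows every run. The runs above cover the iflag=direct row; the iflag=nonblock row follows. The loop shape, sketched with coreutils dd (illustrative paths; direct I/O additionally needs O_DIRECT alignment support from the filesystem):

    for flag_ro in direct nonblock; do
        for flag_rw in direct nonblock sync dsync; do
            dd if=dd.dump0 iflag="$flag_ro" of=dd.dump1 oflag="$flag_rw"
            cmp -s dd.dump0 dd.dump1 || echo "mismatch with $flag_ro/$flag_rw"
        done
    done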
00:38:51.815 [2024-07-13 23:24:41.170044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175289 ] 00:38:52.073 [2024-07-13 23:24:41.325094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.073 [2024-07-13 23:24:41.408196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.590  Copying: 512/512 [B] (average 500 kBps) 00:38:52.590 00:38:52.590 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ssaivootsi4q2ujcjoda6fd22q90l1vehlx098vhqc5yfww7ncn6f1j9oyxjwahfw3x19gpdxb2uin90y1zfqwq7he0dmc7h1chbje6y2rkcfywvera0lgndulpghag3dfptuubrpk2httgqpcrosv4skvc8w5wjoize23ak09v0g18ge1nyrkb9xdkcbehyao7i3up23wcp2gmoj42o3aitdfqrh5o70d3dh2y17258h2u3bfwbghmp4synk2xneghrnuntgi34dd16pb5cypb2l343bnmrgibdbsnmcewjji0i3nk1w63qoyfsqtdtv7hjvn8n30cd18y9qz180rlltj0aq1nprjfercr9b27p1zsddlk46e4u7yqmjq7yatkx4ahuzlhr90on9f58twphqgf1zmoc59ex7tyscl5f8b2mbnkqa7ynovrcmqmv4r0bjvnsngbxk8zo4d2ujaaya9gh7gjkbvbgrqs5nhealqx0wzi5xjsrlafmy1c6 == \s\s\a\i\v\o\o\t\s\i\4\q\2\u\j\c\j\o\d\a\6\f\d\2\2\q\9\0\l\1\v\e\h\l\x\0\9\8\v\h\q\c\5\y\f\w\w\7\n\c\n\6\f\1\j\9\o\y\x\j\w\a\h\f\w\3\x\1\9\g\p\d\x\b\2\u\i\n\9\0\y\1\z\f\q\w\q\7\h\e\0\d\m\c\7\h\1\c\h\b\j\e\6\y\2\r\k\c\f\y\w\v\e\r\a\0\l\g\n\d\u\l\p\g\h\a\g\3\d\f\p\t\u\u\b\r\p\k\2\h\t\t\g\q\p\c\r\o\s\v\4\s\k\v\c\8\w\5\w\j\o\i\z\e\2\3\a\k\0\9\v\0\g\1\8\g\e\1\n\y\r\k\b\9\x\d\k\c\b\e\h\y\a\o\7\i\3\u\p\2\3\w\c\p\2\g\m\o\j\4\2\o\3\a\i\t\d\f\q\r\h\5\o\7\0\d\3\d\h\2\y\1\7\2\5\8\h\2\u\3\b\f\w\b\g\h\m\p\4\s\y\n\k\2\x\n\e\g\h\r\n\u\n\t\g\i\3\4\d\d\1\6\p\b\5\c\y\p\b\2\l\3\4\3\b\n\m\r\g\i\b\d\b\s\n\m\c\e\w\j\j\i\0\i\3\n\k\1\w\6\3\q\o\y\f\s\q\t\d\t\v\7\h\j\v\n\8\n\3\0\c\d\1\8\y\9\q\z\1\8\0\r\l\l\t\j\0\a\q\1\n\p\r\j\f\e\r\c\r\9\b\2\7\p\1\z\s\d\d\l\k\4\6\e\4\u\7\y\q\m\j\q\7\y\a\t\k\x\4\a\h\u\z\l\h\r\9\0\o\n\9\f\5\8\t\w\p\h\q\g\f\1\z\m\o\c\5\9\e\x\7\t\y\s\c\l\5\f\8\b\2\m\b\n\k\q\a\7\y\n\o\v\r\c\m\q\m\v\4\r\0\b\j\v\n\s\n\g\b\x\k\8\z\o\4\d\2\u\j\a\a\y\a\9\g\h\7\g\j\k\b\v\b\g\r\q\s\5\n\h\e\a\l\q\x\0\w\z\i\5\x\j\s\r\l\a\f\m\y\1\c\6 ]] 00:38:52.590 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:52.590 23:24:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:52.590 [2024-07-13 23:24:41.820226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:52.590 [2024-07-13 23:24:41.820458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175299 ] 00:38:52.590 [2024-07-13 23:24:41.961708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.848 [2024-07-13 23:24:42.048679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.105  Copying: 512/512 [B] (average 500 kBps) 00:38:53.105 00:38:53.106 23:24:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ssaivootsi4q2ujcjoda6fd22q90l1vehlx098vhqc5yfww7ncn6f1j9oyxjwahfw3x19gpdxb2uin90y1zfqwq7he0dmc7h1chbje6y2rkcfywvera0lgndulpghag3dfptuubrpk2httgqpcrosv4skvc8w5wjoize23ak09v0g18ge1nyrkb9xdkcbehyao7i3up23wcp2gmoj42o3aitdfqrh5o70d3dh2y17258h2u3bfwbghmp4synk2xneghrnuntgi34dd16pb5cypb2l343bnmrgibdbsnmcewjji0i3nk1w63qoyfsqtdtv7hjvn8n30cd18y9qz180rlltj0aq1nprjfercr9b27p1zsddlk46e4u7yqmjq7yatkx4ahuzlhr90on9f58twphqgf1zmoc59ex7tyscl5f8b2mbnkqa7ynovrcmqmv4r0bjvnsngbxk8zo4d2ujaaya9gh7gjkbvbgrqs5nhealqx0wzi5xjsrlafmy1c6 == \s\s\a\i\v\o\o\t\s\i\4\q\2\u\j\c\j\o\d\a\6\f\d\2\2\q\9\0\l\1\v\e\h\l\x\0\9\8\v\h\q\c\5\y\f\w\w\7\n\c\n\6\f\1\j\9\o\y\x\j\w\a\h\f\w\3\x\1\9\g\p\d\x\b\2\u\i\n\9\0\y\1\z\f\q\w\q\7\h\e\0\d\m\c\7\h\1\c\h\b\j\e\6\y\2\r\k\c\f\y\w\v\e\r\a\0\l\g\n\d\u\l\p\g\h\a\g\3\d\f\p\t\u\u\b\r\p\k\2\h\t\t\g\q\p\c\r\o\s\v\4\s\k\v\c\8\w\5\w\j\o\i\z\e\2\3\a\k\0\9\v\0\g\1\8\g\e\1\n\y\r\k\b\9\x\d\k\c\b\e\h\y\a\o\7\i\3\u\p\2\3\w\c\p\2\g\m\o\j\4\2\o\3\a\i\t\d\f\q\r\h\5\o\7\0\d\3\d\h\2\y\1\7\2\5\8\h\2\u\3\b\f\w\b\g\h\m\p\4\s\y\n\k\2\x\n\e\g\h\r\n\u\n\t\g\i\3\4\d\d\1\6\p\b\5\c\y\p\b\2\l\3\4\3\b\n\m\r\g\i\b\d\b\s\n\m\c\e\w\j\j\i\0\i\3\n\k\1\w\6\3\q\o\y\f\s\q\t\d\t\v\7\h\j\v\n\8\n\3\0\c\d\1\8\y\9\q\z\1\8\0\r\l\l\t\j\0\a\q\1\n\p\r\j\f\e\r\c\r\9\b\2\7\p\1\z\s\d\d\l\k\4\6\e\4\u\7\y\q\m\j\q\7\y\a\t\k\x\4\a\h\u\z\l\h\r\9\0\o\n\9\f\5\8\t\w\p\h\q\g\f\1\z\m\o\c\5\9\e\x\7\t\y\s\c\l\5\f\8\b\2\m\b\n\k\q\a\7\y\n\o\v\r\c\m\q\m\v\4\r\0\b\j\v\n\s\n\g\b\x\k\8\z\o\4\d\2\u\j\a\a\y\a\9\g\h\7\g\j\k\b\v\b\g\r\q\s\5\n\h\e\a\l\q\x\0\w\z\i\5\x\j\s\r\l\a\f\m\y\1\c\6 ]] 00:38:53.106 23:24:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:53.106 23:24:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:53.106 [2024-07-13 23:24:42.483743] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:53.106 [2024-07-13 23:24:42.484056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175311 ] 00:38:53.364 [2024-07-13 23:24:42.633197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.364 [2024-07-13 23:24:42.724537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.879  Copying: 512/512 [B] (average 250 kBps) 00:38:53.879 00:38:53.880 23:24:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ssaivootsi4q2ujcjoda6fd22q90l1vehlx098vhqc5yfww7ncn6f1j9oyxjwahfw3x19gpdxb2uin90y1zfqwq7he0dmc7h1chbje6y2rkcfywvera0lgndulpghag3dfptuubrpk2httgqpcrosv4skvc8w5wjoize23ak09v0g18ge1nyrkb9xdkcbehyao7i3up23wcp2gmoj42o3aitdfqrh5o70d3dh2y17258h2u3bfwbghmp4synk2xneghrnuntgi34dd16pb5cypb2l343bnmrgibdbsnmcewjji0i3nk1w63qoyfsqtdtv7hjvn8n30cd18y9qz180rlltj0aq1nprjfercr9b27p1zsddlk46e4u7yqmjq7yatkx4ahuzlhr90on9f58twphqgf1zmoc59ex7tyscl5f8b2mbnkqa7ynovrcmqmv4r0bjvnsngbxk8zo4d2ujaaya9gh7gjkbvbgrqs5nhealqx0wzi5xjsrlafmy1c6 == \s\s\a\i\v\o\o\t\s\i\4\q\2\u\j\c\j\o\d\a\6\f\d\2\2\q\9\0\l\1\v\e\h\l\x\0\9\8\v\h\q\c\5\y\f\w\w\7\n\c\n\6\f\1\j\9\o\y\x\j\w\a\h\f\w\3\x\1\9\g\p\d\x\b\2\u\i\n\9\0\y\1\z\f\q\w\q\7\h\e\0\d\m\c\7\h\1\c\h\b\j\e\6\y\2\r\k\c\f\y\w\v\e\r\a\0\l\g\n\d\u\l\p\g\h\a\g\3\d\f\p\t\u\u\b\r\p\k\2\h\t\t\g\q\p\c\r\o\s\v\4\s\k\v\c\8\w\5\w\j\o\i\z\e\2\3\a\k\0\9\v\0\g\1\8\g\e\1\n\y\r\k\b\9\x\d\k\c\b\e\h\y\a\o\7\i\3\u\p\2\3\w\c\p\2\g\m\o\j\4\2\o\3\a\i\t\d\f\q\r\h\5\o\7\0\d\3\d\h\2\y\1\7\2\5\8\h\2\u\3\b\f\w\b\g\h\m\p\4\s\y\n\k\2\x\n\e\g\h\r\n\u\n\t\g\i\3\4\d\d\1\6\p\b\5\c\y\p\b\2\l\3\4\3\b\n\m\r\g\i\b\d\b\s\n\m\c\e\w\j\j\i\0\i\3\n\k\1\w\6\3\q\o\y\f\s\q\t\d\t\v\7\h\j\v\n\8\n\3\0\c\d\1\8\y\9\q\z\1\8\0\r\l\l\t\j\0\a\q\1\n\p\r\j\f\e\r\c\r\9\b\2\7\p\1\z\s\d\d\l\k\4\6\e\4\u\7\y\q\m\j\q\7\y\a\t\k\x\4\a\h\u\z\l\h\r\9\0\o\n\9\f\5\8\t\w\p\h\q\g\f\1\z\m\o\c\5\9\e\x\7\t\y\s\c\l\5\f\8\b\2\m\b\n\k\q\a\7\y\n\o\v\r\c\m\q\m\v\4\r\0\b\j\v\n\s\n\g\b\x\k\8\z\o\4\d\2\u\j\a\a\y\a\9\g\h\7\g\j\k\b\v\b\g\r\q\s\5\n\h\e\a\l\q\x\0\w\z\i\5\x\j\s\r\l\a\f\m\y\1\c\6 ]] 00:38:53.880 23:24:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:53.880 23:24:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:53.880 [2024-07-13 23:24:43.161383] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:38:53.880 [2024-07-13 23:24:43.161650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175328 ] 00:38:54.138 [2024-07-13 23:24:43.308214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.138 [2024-07-13 23:24:43.389890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.396  Copying: 512/512 [B] (average 250 kBps) 00:38:54.396 00:38:54.396 23:24:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ssaivootsi4q2ujcjoda6fd22q90l1vehlx098vhqc5yfww7ncn6f1j9oyxjwahfw3x19gpdxb2uin90y1zfqwq7he0dmc7h1chbje6y2rkcfywvera0lgndulpghag3dfptuubrpk2httgqpcrosv4skvc8w5wjoize23ak09v0g18ge1nyrkb9xdkcbehyao7i3up23wcp2gmoj42o3aitdfqrh5o70d3dh2y17258h2u3bfwbghmp4synk2xneghrnuntgi34dd16pb5cypb2l343bnmrgibdbsnmcewjji0i3nk1w63qoyfsqtdtv7hjvn8n30cd18y9qz180rlltj0aq1nprjfercr9b27p1zsddlk46e4u7yqmjq7yatkx4ahuzlhr90on9f58twphqgf1zmoc59ex7tyscl5f8b2mbnkqa7ynovrcmqmv4r0bjvnsngbxk8zo4d2ujaaya9gh7gjkbvbgrqs5nhealqx0wzi5xjsrlafmy1c6 == \s\s\a\i\v\o\o\t\s\i\4\q\2\u\j\c\j\o\d\a\6\f\d\2\2\q\9\0\l\1\v\e\h\l\x\0\9\8\v\h\q\c\5\y\f\w\w\7\n\c\n\6\f\1\j\9\o\y\x\j\w\a\h\f\w\3\x\1\9\g\p\d\x\b\2\u\i\n\9\0\y\1\z\f\q\w\q\7\h\e\0\d\m\c\7\h\1\c\h\b\j\e\6\y\2\r\k\c\f\y\w\v\e\r\a\0\l\g\n\d\u\l\p\g\h\a\g\3\d\f\p\t\u\u\b\r\p\k\2\h\t\t\g\q\p\c\r\o\s\v\4\s\k\v\c\8\w\5\w\j\o\i\z\e\2\3\a\k\0\9\v\0\g\1\8\g\e\1\n\y\r\k\b\9\x\d\k\c\b\e\h\y\a\o\7\i\3\u\p\2\3\w\c\p\2\g\m\o\j\4\2\o\3\a\i\t\d\f\q\r\h\5\o\7\0\d\3\d\h\2\y\1\7\2\5\8\h\2\u\3\b\f\w\b\g\h\m\p\4\s\y\n\k\2\x\n\e\g\h\r\n\u\n\t\g\i\3\4\d\d\1\6\p\b\5\c\y\p\b\2\l\3\4\3\b\n\m\r\g\i\b\d\b\s\n\m\c\e\w\j\j\i\0\i\3\n\k\1\w\6\3\q\o\y\f\s\q\t\d\t\v\7\h\j\v\n\8\n\3\0\c\d\1\8\y\9\q\z\1\8\0\r\l\l\t\j\0\a\q\1\n\p\r\j\f\e\r\c\r\9\b\2\7\p\1\z\s\d\d\l\k\4\6\e\4\u\7\y\q\m\j\q\7\y\a\t\k\x\4\a\h\u\z\l\h\r\9\0\o\n\9\f\5\8\t\w\p\h\q\g\f\1\z\m\o\c\5\9\e\x\7\t\y\s\c\l\5\f\8\b\2\m\b\n\k\q\a\7\y\n\o\v\r\c\m\q\m\v\4\r\0\b\j\v\n\s\n\g\b\x\k\8\z\o\4\d\2\u\j\a\a\y\a\9\g\h\7\g\j\k\b\v\b\g\r\q\s\5\n\h\e\a\l\q\x\0\w\z\i\5\x\j\s\r\l\a\f\m\y\1\c\6 ]] 00:38:54.396 00:38:54.396 real 0m5.313s 00:38:54.396 user 0m2.714s 00:38:54.396 sys 0m1.478s 00:38:54.396 23:24:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:54.396 ************************************ 00:38:54.396 23:24:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:54.396 END TEST dd_flags_misc 00:38:54.396 ************************************ 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:38:54.654 * Second test run, using AIO 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:54.654 ************************************ 00:38:54.654 START TEST 
dd_flag_append_forced_aio 00:38:54.654 ************************************ 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=9q0yj1l0r2hdqtvscw8yx3e17tugcf4h 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=nhfn3pec2jn5hh90mmlu62gpw9wx3hs6 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 9q0yj1l0r2hdqtvscw8yx3e17tugcf4h 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s nhfn3pec2jn5hh90mmlu62gpw9wx3hs6 00:38:54.654 23:24:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:54.654 [2024-07-13 23:24:43.874811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
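At this point the suite has started its second pass with --aio appended to DD_APP, re-running the posix cases through spdk_dd's AIO path. The append case generates two 32-byte random strings, writes one into each dump file, copies dump0 onto dump1 with --oflag=append, and then (in the output that follows) asserts dump1 now holds its original bytes followed by dump0's. The analogous coreutils operation, for illustration (conv=notrunc keeps dump1's existing contents in place while oflag=append positions the write at the end):

    printf '%s' "$dump0" > dd.dump0
    printf '%s' "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc
    [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo 'append verified'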
00:38:54.654 [2024-07-13 23:24:43.875091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175354 ] 00:38:54.654 [2024-07-13 23:24:44.028870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.912 [2024-07-13 23:24:44.117369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.170  Copying: 32/32 [B] (average 31 kBps) 00:38:55.170 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ nhfn3pec2jn5hh90mmlu62gpw9wx3hs69q0yj1l0r2hdqtvscw8yx3e17tugcf4h == \n\h\f\n\3\p\e\c\2\j\n\5\h\h\9\0\m\m\l\u\6\2\g\p\w\9\w\x\3\h\s\6\9\q\0\y\j\1\l\0\r\2\h\d\q\t\v\s\c\w\8\y\x\3\e\1\7\t\u\g\c\f\4\h ]] 00:38:55.170 00:38:55.170 real 0m0.666s 00:38:55.170 user 0m0.338s 00:38:55.170 sys 0m0.199s 00:38:55.170 ************************************ 00:38:55.170 END TEST dd_flag_append_forced_aio 00:38:55.170 ************************************ 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:55.170 ************************************ 00:38:55.170 START TEST dd_flag_directory_forced_aio 00:38:55.170 ************************************ 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:55.170 23:24:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:55.428 [2024-07-13 23:24:44.591294] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:55.428 [2024-07-13 23:24:44.591557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175396 ] 00:38:55.428 [2024-07-13 23:24:44.735782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.428 [2024-07-13 23:24:44.808850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.686 [2024-07-13 23:24:44.891849] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:55.686 [2024-07-13 23:24:44.891951] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:55.686 [2024-07-13 23:24:44.891989] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:55.686 [2024-07-13 23:24:45.010886] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:55.944 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:55.944 [2024-07-13 23:24:45.190419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:55.944 [2024-07-13 23:24:45.190696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175411 ] 00:38:55.944 [2024-07-13 23:24:45.337306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.203 [2024-07-13 23:24:45.431289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.203 [2024-07-13 23:24:45.517432] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:56.203 [2024-07-13 23:24:45.517537] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:56.203 [2024-07-13 23:24:45.517584] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:56.461 [2024-07-13 23:24:45.639452] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:56.461 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:56.461 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:56.461 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:56.462 00:38:56.462 real 0m1.248s 00:38:56.462 user 0m0.641s 00:38:56.462 sys 0m0.407s 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@10 -- # set +x 00:38:56.462 ************************************ 00:38:56.462 END TEST dd_flag_directory_forced_aio 00:38:56.462 ************************************ 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:56.462 ************************************ 00:38:56.462 START TEST dd_flag_nofollow_forced_aio 00:38:56.462 ************************************ 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:56.462 23:24:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:56.721 [2024-07-13 23:24:45.897534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:56.721 [2024-07-13 23:24:45.898071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175443 ] 00:38:56.721 [2024-07-13 23:24:46.041776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.721 [2024-07-13 23:24:46.111888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.980 [2024-07-13 23:24:46.193795] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:56.980 [2024-07-13 23:24:46.193916] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:56.980 [2024-07-13 23:24:46.193958] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:56.980 [2024-07-13 23:24:46.312627] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:57.238 23:24:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:57.238 [2024-07-13 23:24:46.500370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:57.238 [2024-07-13 23:24:46.500645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175458 ] 00:38:57.497 [2024-07-13 23:24:46.649026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.497 [2024-07-13 23:24:46.726714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.497 [2024-07-13 23:24:46.807736] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:57.497 [2024-07-13 23:24:46.807857] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:57.497 [2024-07-13 23:24:46.807917] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:57.756 [2024-07-13 23:24:46.929678] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:57.756 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:57.756 [2024-07-13 23:24:47.116025] Starting SPDK 
v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:57.756 [2024-07-13 23:24:47.116295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175469 ] 00:38:58.015 [2024-07-13 23:24:47.264642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.015 [2024-07-13 23:24:47.355139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.533  Copying: 512/512 [B] (average 500 kBps) 00:38:58.533 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ raszxn75zjalc2jljmuhfk20r7ljclqdj3y8629yjxwx6ew4tv7mehhn9rydso7iq9zh2uqsx7ddyyx4qbgpjkr9w41fk855c97bbjmpevu25pdcaijuowd170wvw7ndbr2qofo86od8z2vwldgwufrccq6ojjest6k6s7jzg8nc342gs2y0htqzyfg1rxwc3cqoulxut82fh7e2g2xrkg3bb1nvd2ek3y5x8jb3thabu7ox5tbngbp5ok8s60q1xmo1a76uljipcvrpqvyn57jpexvpqq7ach7nti3p1br2x6si36webuyncizahebarri4ztyflvhwm0lgs2i3t1sy6vw8g7f0026j5tmr4sgt5lvqt74kq9m0o4198ta86j2y4ufktcgbchx7yyvu3t9tdcswsmtvun4xrfl33efzf2vf061xycox268hqcsoder0agll3fu29hcpaakyv3j6e050kcsy2g95dmh18e77sc5gnusykr646eeoz1bg == \r\a\s\z\x\n\7\5\z\j\a\l\c\2\j\l\j\m\u\h\f\k\2\0\r\7\l\j\c\l\q\d\j\3\y\8\6\2\9\y\j\x\w\x\6\e\w\4\t\v\7\m\e\h\h\n\9\r\y\d\s\o\7\i\q\9\z\h\2\u\q\s\x\7\d\d\y\y\x\4\q\b\g\p\j\k\r\9\w\4\1\f\k\8\5\5\c\9\7\b\b\j\m\p\e\v\u\2\5\p\d\c\a\i\j\u\o\w\d\1\7\0\w\v\w\7\n\d\b\r\2\q\o\f\o\8\6\o\d\8\z\2\v\w\l\d\g\w\u\f\r\c\c\q\6\o\j\j\e\s\t\6\k\6\s\7\j\z\g\8\n\c\3\4\2\g\s\2\y\0\h\t\q\z\y\f\g\1\r\x\w\c\3\c\q\o\u\l\x\u\t\8\2\f\h\7\e\2\g\2\x\r\k\g\3\b\b\1\n\v\d\2\e\k\3\y\5\x\8\j\b\3\t\h\a\b\u\7\o\x\5\t\b\n\g\b\p\5\o\k\8\s\6\0\q\1\x\m\o\1\a\7\6\u\l\j\i\p\c\v\r\p\q\v\y\n\5\7\j\p\e\x\v\p\q\q\7\a\c\h\7\n\t\i\3\p\1\b\r\2\x\6\s\i\3\6\w\e\b\u\y\n\c\i\z\a\h\e\b\a\r\r\i\4\z\t\y\f\l\v\h\w\m\0\l\g\s\2\i\3\t\1\s\y\6\v\w\8\g\7\f\0\0\2\6\j\5\t\m\r\4\s\g\t\5\l\v\q\t\7\4\k\q\9\m\0\o\4\1\9\8\t\a\8\6\j\2\y\4\u\f\k\t\c\g\b\c\h\x\7\y\y\v\u\3\t\9\t\d\c\s\w\s\m\t\v\u\n\4\x\r\f\l\3\3\e\f\z\f\2\v\f\0\6\1\x\y\c\o\x\2\6\8\h\q\c\s\o\d\e\r\0\a\g\l\l\3\f\u\2\9\h\c\p\a\a\k\y\v\3\j\6\e\0\5\0\k\c\s\y\2\g\9\5\d\m\h\1\8\e\7\7\s\c\5\g\n\u\s\y\k\r\6\4\6\e\e\o\z\1\b\g ]] 00:38:58.533 00:38:58.533 real 0m1.903s 00:38:58.533 user 0m0.964s 00:38:58.533 sys 0m0.597s 00:38:58.533 ************************************ 00:38:58.533 END TEST dd_flag_nofollow_forced_aio 00:38:58.533 ************************************ 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:58.533 ************************************ 00:38:58.533 START TEST dd_flag_noatime_forced_aio 00:38:58.533 ************************************ 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:38:58.533 
23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720913087 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720913087 00:38:58.533 23:24:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:38:59.497 23:24:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:59.497 [2024-07-13 23:24:48.867468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:38:59.497 [2024-07-13 23:24:48.868203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175517 ] 00:38:59.756 [2024-07-13 23:24:49.012808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.756 [2024-07-13 23:24:49.112674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.274  Copying: 512/512 [B] (average 500 kBps) 00:39:00.274 00:39:00.274 23:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:00.274 23:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720913087 )) 00:39:00.274 23:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:00.274 23:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720913087 )) 00:39:00.274 23:24:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:00.274 [2024-07-13 23:24:49.575299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:00.274 [2024-07-13 23:24:49.575570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175532 ] 00:39:00.533 [2024-07-13 23:24:49.721009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.533 [2024-07-13 23:24:49.804102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.791  Copying: 512/512 [B] (average 500 kBps) 00:39:00.791 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720913089 )) 00:39:01.051 00:39:01.051 real 0m2.408s 00:39:01.051 user 0m0.735s 00:39:01.051 sys 0m0.384s 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:39:01.051 ************************************ 00:39:01.051 END TEST dd_flag_noatime_forced_aio 00:39:01.051 ************************************ 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:39:01.051 ************************************ 00:39:01.051 START TEST dd_flags_misc_forced_aio 00:39:01.051 ************************************ 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:01.051 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:39:01.051 [2024-07-13 23:24:50.322195] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:01.051 [2024-07-13 23:24:50.323016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175569 ] 00:39:01.310 [2024-07-13 23:24:50.469454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.310 [2024-07-13 23:24:50.569732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.568  Copying: 512/512 [B] (average 500 kBps) 00:39:01.568 00:39:01.568 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k3b1q715pq38aoepjw0yjwtnm1w8hc7j1ixwyzpmdkgd1wyedwck1hssjt6um4r8f94fmbfx60837tpvn4j76517qv7jycjzuzbzjmsumvy2dfzgd285kxvvzcitg3rxjacycqmzy3dntj6l6x9n3qltiq4g5oe2koixx3ezra53bx87e7ofjoamxxjjjckq7rssgh3gjxdxz64uzbzn28rrq628wyrnatnma61vryb5zsctr02u4v519emwj86rsx6e8c9qyda3fv5gkn1y3kl8xlr6s8fwc9qzcfqmv7mu3zg4mycml1s68yar78yclo5a2e5cfc0j8yc5uwowvqdzycoot59xn3s80wddgz1bbe7rfdh7ncy5xqnktzn2y73ny50qi8qbxpc57npu1x3yr2m0ccxqtf3n9fkt7nuxcjro2gqutewsu7iisw0002a62nuipdgohjhcf030mice7ttau1b8siqp171iq2vi9h9zp202294c68y6tyi == \6\k\3\b\1\q\7\1\5\p\q\3\8\a\o\e\p\j\w\0\y\j\w\t\n\m\1\w\8\h\c\7\j\1\i\x\w\y\z\p\m\d\k\g\d\1\w\y\e\d\w\c\k\1\h\s\s\j\t\6\u\m\4\r\8\f\9\4\f\m\b\f\x\6\0\8\3\7\t\p\v\n\4\j\7\6\5\1\7\q\v\7\j\y\c\j\z\u\z\b\z\j\m\s\u\m\v\y\2\d\f\z\g\d\2\8\5\k\x\v\v\z\c\i\t\g\3\r\x\j\a\c\y\c\q\m\z\y\3\d\n\t\j\6\l\6\x\9\n\3\q\l\t\i\q\4\g\5\o\e\2\k\o\i\x\x\3\e\z\r\a\5\3\b\x\8\7\e\7\o\f\j\o\a\m\x\x\j\j\j\c\k\q\7\r\s\s\g\h\3\g\j\x\d\x\z\6\4\u\z\b\z\n\2\8\r\r\q\6\2\8\w\y\r\n\a\t\n\m\a\6\1\v\r\y\b\5\z\s\c\t\r\0\2\u\4\v\5\1\9\e\m\w\j\8\6\r\s\x\6\e\8\c\9\q\y\d\a\3\f\v\5\g\k\n\1\y\3\k\l\8\x\l\r\6\s\8\f\w\c\9\q\z\c\f\q\m\v\7\m\u\3\z\g\4\m\y\c\m\l\1\s\6\8\y\a\r\7\8\y\c\l\o\5\a\2\e\5\c\f\c\0\j\8\y\c\5\u\w\o\w\v\q\d\z\y\c\o\o\t\5\9\x\n\3\s\8\0\w\d\d\g\z\1\b\b\e\7\r\f\d\h\7\n\c\y\5\x\q\n\k\t\z\n\2\y\7\3\n\y\5\0\q\i\8\q\b\x\p\c\5\7\n\p\u\1\x\3\y\r\2\m\0\c\c\x\q\t\f\3\n\9\f\k\t\7\n\u\x\c\j\r\o\2\g\q\u\t\e\w\s\u\7\i\i\s\w\0\0\0\2\a\6\2\n\u\i\p\d\g\o\h\j\h\c\f\0\3\0\m\i\c\e\7\t\t\a\u\1\b\8\s\i\q\p\1\7\1\i\q\2\v\i\9\h\9\z\p\2\0\2\2\9\4\c\6\8\y\6\t\y\i ]] 00:39:01.568 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:01.568 23:24:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:39:01.827 [2024-07-13 23:24:50.992442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:01.827 [2024-07-13 23:24:50.992687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175577 ] 00:39:01.827 [2024-07-13 23:24:51.132484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.827 [2024-07-13 23:24:51.205549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.344  Copying: 512/512 [B] (average 500 kBps) 00:39:02.344 00:39:02.344 23:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k3b1q715pq38aoepjw0yjwtnm1w8hc7j1ixwyzpmdkgd1wyedwck1hssjt6um4r8f94fmbfx60837tpvn4j76517qv7jycjzuzbzjmsumvy2dfzgd285kxvvzcitg3rxjacycqmzy3dntj6l6x9n3qltiq4g5oe2koixx3ezra53bx87e7ofjoamxxjjjckq7rssgh3gjxdxz64uzbzn28rrq628wyrnatnma61vryb5zsctr02u4v519emwj86rsx6e8c9qyda3fv5gkn1y3kl8xlr6s8fwc9qzcfqmv7mu3zg4mycml1s68yar78yclo5a2e5cfc0j8yc5uwowvqdzycoot59xn3s80wddgz1bbe7rfdh7ncy5xqnktzn2y73ny50qi8qbxpc57npu1x3yr2m0ccxqtf3n9fkt7nuxcjro2gqutewsu7iisw0002a62nuipdgohjhcf030mice7ttau1b8siqp171iq2vi9h9zp202294c68y6tyi == \6\k\3\b\1\q\7\1\5\p\q\3\8\a\o\e\p\j\w\0\y\j\w\t\n\m\1\w\8\h\c\7\j\1\i\x\w\y\z\p\m\d\k\g\d\1\w\y\e\d\w\c\k\1\h\s\s\j\t\6\u\m\4\r\8\f\9\4\f\m\b\f\x\6\0\8\3\7\t\p\v\n\4\j\7\6\5\1\7\q\v\7\j\y\c\j\z\u\z\b\z\j\m\s\u\m\v\y\2\d\f\z\g\d\2\8\5\k\x\v\v\z\c\i\t\g\3\r\x\j\a\c\y\c\q\m\z\y\3\d\n\t\j\6\l\6\x\9\n\3\q\l\t\i\q\4\g\5\o\e\2\k\o\i\x\x\3\e\z\r\a\5\3\b\x\8\7\e\7\o\f\j\o\a\m\x\x\j\j\j\c\k\q\7\r\s\s\g\h\3\g\j\x\d\x\z\6\4\u\z\b\z\n\2\8\r\r\q\6\2\8\w\y\r\n\a\t\n\m\a\6\1\v\r\y\b\5\z\s\c\t\r\0\2\u\4\v\5\1\9\e\m\w\j\8\6\r\s\x\6\e\8\c\9\q\y\d\a\3\f\v\5\g\k\n\1\y\3\k\l\8\x\l\r\6\s\8\f\w\c\9\q\z\c\f\q\m\v\7\m\u\3\z\g\4\m\y\c\m\l\1\s\6\8\y\a\r\7\8\y\c\l\o\5\a\2\e\5\c\f\c\0\j\8\y\c\5\u\w\o\w\v\q\d\z\y\c\o\o\t\5\9\x\n\3\s\8\0\w\d\d\g\z\1\b\b\e\7\r\f\d\h\7\n\c\y\5\x\q\n\k\t\z\n\2\y\7\3\n\y\5\0\q\i\8\q\b\x\p\c\5\7\n\p\u\1\x\3\y\r\2\m\0\c\c\x\q\t\f\3\n\9\f\k\t\7\n\u\x\c\j\r\o\2\g\q\u\t\e\w\s\u\7\i\i\s\w\0\0\0\2\a\6\2\n\u\i\p\d\g\o\h\j\h\c\f\0\3\0\m\i\c\e\7\t\t\a\u\1\b\8\s\i\q\p\1\7\1\i\q\2\v\i\9\h\9\z\p\2\0\2\2\9\4\c\6\8\y\6\t\y\i ]] 00:39:02.344 23:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:02.344 23:24:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:39:02.344 [2024-07-13 23:24:51.650757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:02.344 [2024-07-13 23:24:51.651041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175595 ] 00:39:02.602 [2024-07-13 23:24:51.797144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.602 [2024-07-13 23:24:51.894737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.169  Copying: 512/512 [B] (average 125 kBps) 00:39:03.169 00:39:03.169 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k3b1q715pq38aoepjw0yjwtnm1w8hc7j1ixwyzpmdkgd1wyedwck1hssjt6um4r8f94fmbfx60837tpvn4j76517qv7jycjzuzbzjmsumvy2dfzgd285kxvvzcitg3rxjacycqmzy3dntj6l6x9n3qltiq4g5oe2koixx3ezra53bx87e7ofjoamxxjjjckq7rssgh3gjxdxz64uzbzn28rrq628wyrnatnma61vryb5zsctr02u4v519emwj86rsx6e8c9qyda3fv5gkn1y3kl8xlr6s8fwc9qzcfqmv7mu3zg4mycml1s68yar78yclo5a2e5cfc0j8yc5uwowvqdzycoot59xn3s80wddgz1bbe7rfdh7ncy5xqnktzn2y73ny50qi8qbxpc57npu1x3yr2m0ccxqtf3n9fkt7nuxcjro2gqutewsu7iisw0002a62nuipdgohjhcf030mice7ttau1b8siqp171iq2vi9h9zp202294c68y6tyi == \6\k\3\b\1\q\7\1\5\p\q\3\8\a\o\e\p\j\w\0\y\j\w\t\n\m\1\w\8\h\c\7\j\1\i\x\w\y\z\p\m\d\k\g\d\1\w\y\e\d\w\c\k\1\h\s\s\j\t\6\u\m\4\r\8\f\9\4\f\m\b\f\x\6\0\8\3\7\t\p\v\n\4\j\7\6\5\1\7\q\v\7\j\y\c\j\z\u\z\b\z\j\m\s\u\m\v\y\2\d\f\z\g\d\2\8\5\k\x\v\v\z\c\i\t\g\3\r\x\j\a\c\y\c\q\m\z\y\3\d\n\t\j\6\l\6\x\9\n\3\q\l\t\i\q\4\g\5\o\e\2\k\o\i\x\x\3\e\z\r\a\5\3\b\x\8\7\e\7\o\f\j\o\a\m\x\x\j\j\j\c\k\q\7\r\s\s\g\h\3\g\j\x\d\x\z\6\4\u\z\b\z\n\2\8\r\r\q\6\2\8\w\y\r\n\a\t\n\m\a\6\1\v\r\y\b\5\z\s\c\t\r\0\2\u\4\v\5\1\9\e\m\w\j\8\6\r\s\x\6\e\8\c\9\q\y\d\a\3\f\v\5\g\k\n\1\y\3\k\l\8\x\l\r\6\s\8\f\w\c\9\q\z\c\f\q\m\v\7\m\u\3\z\g\4\m\y\c\m\l\1\s\6\8\y\a\r\7\8\y\c\l\o\5\a\2\e\5\c\f\c\0\j\8\y\c\5\u\w\o\w\v\q\d\z\y\c\o\o\t\5\9\x\n\3\s\8\0\w\d\d\g\z\1\b\b\e\7\r\f\d\h\7\n\c\y\5\x\q\n\k\t\z\n\2\y\7\3\n\y\5\0\q\i\8\q\b\x\p\c\5\7\n\p\u\1\x\3\y\r\2\m\0\c\c\x\q\t\f\3\n\9\f\k\t\7\n\u\x\c\j\r\o\2\g\q\u\t\e\w\s\u\7\i\i\s\w\0\0\0\2\a\6\2\n\u\i\p\d\g\o\h\j\h\c\f\0\3\0\m\i\c\e\7\t\t\a\u\1\b\8\s\i\q\p\1\7\1\i\q\2\v\i\9\h\9\z\p\2\0\2\2\9\4\c\6\8\y\6\t\y\i ]] 00:39:03.169 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:03.169 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:39:03.169 [2024-07-13 23:24:52.335878] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:03.169 [2024-07-13 23:24:52.336168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175608 ] 00:39:03.169 [2024-07-13 23:24:52.479355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.169 [2024-07-13 23:24:52.565035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.686  Copying: 512/512 [B] (average 166 kBps) 00:39:03.686 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k3b1q715pq38aoepjw0yjwtnm1w8hc7j1ixwyzpmdkgd1wyedwck1hssjt6um4r8f94fmbfx60837tpvn4j76517qv7jycjzuzbzjmsumvy2dfzgd285kxvvzcitg3rxjacycqmzy3dntj6l6x9n3qltiq4g5oe2koixx3ezra53bx87e7ofjoamxxjjjckq7rssgh3gjxdxz64uzbzn28rrq628wyrnatnma61vryb5zsctr02u4v519emwj86rsx6e8c9qyda3fv5gkn1y3kl8xlr6s8fwc9qzcfqmv7mu3zg4mycml1s68yar78yclo5a2e5cfc0j8yc5uwowvqdzycoot59xn3s80wddgz1bbe7rfdh7ncy5xqnktzn2y73ny50qi8qbxpc57npu1x3yr2m0ccxqtf3n9fkt7nuxcjro2gqutewsu7iisw0002a62nuipdgohjhcf030mice7ttau1b8siqp171iq2vi9h9zp202294c68y6tyi == \6\k\3\b\1\q\7\1\5\p\q\3\8\a\o\e\p\j\w\0\y\j\w\t\n\m\1\w\8\h\c\7\j\1\i\x\w\y\z\p\m\d\k\g\d\1\w\y\e\d\w\c\k\1\h\s\s\j\t\6\u\m\4\r\8\f\9\4\f\m\b\f\x\6\0\8\3\7\t\p\v\n\4\j\7\6\5\1\7\q\v\7\j\y\c\j\z\u\z\b\z\j\m\s\u\m\v\y\2\d\f\z\g\d\2\8\5\k\x\v\v\z\c\i\t\g\3\r\x\j\a\c\y\c\q\m\z\y\3\d\n\t\j\6\l\6\x\9\n\3\q\l\t\i\q\4\g\5\o\e\2\k\o\i\x\x\3\e\z\r\a\5\3\b\x\8\7\e\7\o\f\j\o\a\m\x\x\j\j\j\c\k\q\7\r\s\s\g\h\3\g\j\x\d\x\z\6\4\u\z\b\z\n\2\8\r\r\q\6\2\8\w\y\r\n\a\t\n\m\a\6\1\v\r\y\b\5\z\s\c\t\r\0\2\u\4\v\5\1\9\e\m\w\j\8\6\r\s\x\6\e\8\c\9\q\y\d\a\3\f\v\5\g\k\n\1\y\3\k\l\8\x\l\r\6\s\8\f\w\c\9\q\z\c\f\q\m\v\7\m\u\3\z\g\4\m\y\c\m\l\1\s\6\8\y\a\r\7\8\y\c\l\o\5\a\2\e\5\c\f\c\0\j\8\y\c\5\u\w\o\w\v\q\d\z\y\c\o\o\t\5\9\x\n\3\s\8\0\w\d\d\g\z\1\b\b\e\7\r\f\d\h\7\n\c\y\5\x\q\n\k\t\z\n\2\y\7\3\n\y\5\0\q\i\8\q\b\x\p\c\5\7\n\p\u\1\x\3\y\r\2\m\0\c\c\x\q\t\f\3\n\9\f\k\t\7\n\u\x\c\j\r\o\2\g\q\u\t\e\w\s\u\7\i\i\s\w\0\0\0\2\a\6\2\n\u\i\p\d\g\o\h\j\h\c\f\0\3\0\m\i\c\e\7\t\t\a\u\1\b\8\s\i\q\p\1\7\1\i\q\2\v\i\9\h\9\z\p\2\0\2\2\9\4\c\6\8\y\6\t\y\i ]] 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:03.686 23:24:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:39:03.686 [2024-07-13 23:24:53.018469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:03.686 [2024-07-13 23:24:53.018745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175617 ] 00:39:03.944 [2024-07-13 23:24:53.168068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.944 [2024-07-13 23:24:53.265774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.459  Copying: 512/512 [B] (average 500 kBps) 00:39:04.459 00:39:04.459 23:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vl4sn80x4d3utnvdowwegrzyw398gf8joxiaw9bq3mnklmevnh0wybr375pxff1uf5ko9uxqw0cetc37h0kct20hai6vqqyb6czt3frkm45nppfzwaks1lo6njsf0qukbnm3dmoxbxsm6kxr698283slidqexmy7c9aads2nxictx62qihuemop6qms520oy67ml34k0sj6dm56dzfj3lhqisrfeu9ej5roy0wcjpftwwbvihcid4e63fgvfm49pe7sa2bb66tzgy1wuuoi0zzt7n6kwi87yagrcmxzj5xy7z9a7n4atr9er5xmsmjbyo88vtwk9ibald365fxi4if5o4kdk62wbbvm5x3ieowic0l1fswe9fkehai9dwebr504hgh4faawhfo2434j7e7x8qjfzvjv6g3t42grnye2uodq6fty8h0fct3o35z4af5e5f8mmjlmjywqe6t6lscgsozbyz3wkokyovs74e8b3h5qhqtw91auoqzj9jdlt == \v\l\4\s\n\8\0\x\4\d\3\u\t\n\v\d\o\w\w\e\g\r\z\y\w\3\9\8\g\f\8\j\o\x\i\a\w\9\b\q\3\m\n\k\l\m\e\v\n\h\0\w\y\b\r\3\7\5\p\x\f\f\1\u\f\5\k\o\9\u\x\q\w\0\c\e\t\c\3\7\h\0\k\c\t\2\0\h\a\i\6\v\q\q\y\b\6\c\z\t\3\f\r\k\m\4\5\n\p\p\f\z\w\a\k\s\1\l\o\6\n\j\s\f\0\q\u\k\b\n\m\3\d\m\o\x\b\x\s\m\6\k\x\r\6\9\8\2\8\3\s\l\i\d\q\e\x\m\y\7\c\9\a\a\d\s\2\n\x\i\c\t\x\6\2\q\i\h\u\e\m\o\p\6\q\m\s\5\2\0\o\y\6\7\m\l\3\4\k\0\s\j\6\d\m\5\6\d\z\f\j\3\l\h\q\i\s\r\f\e\u\9\e\j\5\r\o\y\0\w\c\j\p\f\t\w\w\b\v\i\h\c\i\d\4\e\6\3\f\g\v\f\m\4\9\p\e\7\s\a\2\b\b\6\6\t\z\g\y\1\w\u\u\o\i\0\z\z\t\7\n\6\k\w\i\8\7\y\a\g\r\c\m\x\z\j\5\x\y\7\z\9\a\7\n\4\a\t\r\9\e\r\5\x\m\s\m\j\b\y\o\8\8\v\t\w\k\9\i\b\a\l\d\3\6\5\f\x\i\4\i\f\5\o\4\k\d\k\6\2\w\b\b\v\m\5\x\3\i\e\o\w\i\c\0\l\1\f\s\w\e\9\f\k\e\h\a\i\9\d\w\e\b\r\5\0\4\h\g\h\4\f\a\a\w\h\f\o\2\4\3\4\j\7\e\7\x\8\q\j\f\z\v\j\v\6\g\3\t\4\2\g\r\n\y\e\2\u\o\d\q\6\f\t\y\8\h\0\f\c\t\3\o\3\5\z\4\a\f\5\e\5\f\8\m\m\j\l\m\j\y\w\q\e\6\t\6\l\s\c\g\s\o\z\b\y\z\3\w\k\o\k\y\o\v\s\7\4\e\8\b\3\h\5\q\h\q\t\w\9\1\a\u\o\q\z\j\9\j\d\l\t ]] 00:39:04.459 23:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:04.459 23:24:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:39:04.459 [2024-07-13 23:24:53.715897] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:04.459 [2024-07-13 23:24:53.716168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175630 ] 00:39:04.459 [2024-07-13 23:24:53.863994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.717 [2024-07-13 23:24:53.935381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.975  Copying: 512/512 [B] (average 500 kBps) 00:39:04.975 00:39:04.975 23:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vl4sn80x4d3utnvdowwegrzyw398gf8joxiaw9bq3mnklmevnh0wybr375pxff1uf5ko9uxqw0cetc37h0kct20hai6vqqyb6czt3frkm45nppfzwaks1lo6njsf0qukbnm3dmoxbxsm6kxr698283slidqexmy7c9aads2nxictx62qihuemop6qms520oy67ml34k0sj6dm56dzfj3lhqisrfeu9ej5roy0wcjpftwwbvihcid4e63fgvfm49pe7sa2bb66tzgy1wuuoi0zzt7n6kwi87yagrcmxzj5xy7z9a7n4atr9er5xmsmjbyo88vtwk9ibald365fxi4if5o4kdk62wbbvm5x3ieowic0l1fswe9fkehai9dwebr504hgh4faawhfo2434j7e7x8qjfzvjv6g3t42grnye2uodq6fty8h0fct3o35z4af5e5f8mmjlmjywqe6t6lscgsozbyz3wkokyovs74e8b3h5qhqtw91auoqzj9jdlt == \v\l\4\s\n\8\0\x\4\d\3\u\t\n\v\d\o\w\w\e\g\r\z\y\w\3\9\8\g\f\8\j\o\x\i\a\w\9\b\q\3\m\n\k\l\m\e\v\n\h\0\w\y\b\r\3\7\5\p\x\f\f\1\u\f\5\k\o\9\u\x\q\w\0\c\e\t\c\3\7\h\0\k\c\t\2\0\h\a\i\6\v\q\q\y\b\6\c\z\t\3\f\r\k\m\4\5\n\p\p\f\z\w\a\k\s\1\l\o\6\n\j\s\f\0\q\u\k\b\n\m\3\d\m\o\x\b\x\s\m\6\k\x\r\6\9\8\2\8\3\s\l\i\d\q\e\x\m\y\7\c\9\a\a\d\s\2\n\x\i\c\t\x\6\2\q\i\h\u\e\m\o\p\6\q\m\s\5\2\0\o\y\6\7\m\l\3\4\k\0\s\j\6\d\m\5\6\d\z\f\j\3\l\h\q\i\s\r\f\e\u\9\e\j\5\r\o\y\0\w\c\j\p\f\t\w\w\b\v\i\h\c\i\d\4\e\6\3\f\g\v\f\m\4\9\p\e\7\s\a\2\b\b\6\6\t\z\g\y\1\w\u\u\o\i\0\z\z\t\7\n\6\k\w\i\8\7\y\a\g\r\c\m\x\z\j\5\x\y\7\z\9\a\7\n\4\a\t\r\9\e\r\5\x\m\s\m\j\b\y\o\8\8\v\t\w\k\9\i\b\a\l\d\3\6\5\f\x\i\4\i\f\5\o\4\k\d\k\6\2\w\b\b\v\m\5\x\3\i\e\o\w\i\c\0\l\1\f\s\w\e\9\f\k\e\h\a\i\9\d\w\e\b\r\5\0\4\h\g\h\4\f\a\a\w\h\f\o\2\4\3\4\j\7\e\7\x\8\q\j\f\z\v\j\v\6\g\3\t\4\2\g\r\n\y\e\2\u\o\d\q\6\f\t\y\8\h\0\f\c\t\3\o\3\5\z\4\a\f\5\e\5\f\8\m\m\j\l\m\j\y\w\q\e\6\t\6\l\s\c\g\s\o\z\b\y\z\3\w\k\o\k\y\o\v\s\7\4\e\8\b\3\h\5\q\h\q\t\w\9\1\a\u\o\q\z\j\9\j\d\l\t ]] 00:39:04.975 23:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:04.975 23:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:39:04.975 [2024-07-13 23:24:54.356284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:04.975 [2024-07-13 23:24:54.356591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175642 ] 00:39:05.233 [2024-07-13 23:24:54.504810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.234 [2024-07-13 23:24:54.589248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.752  Copying: 512/512 [B] (average 166 kBps) 00:39:05.752 00:39:05.752 23:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vl4sn80x4d3utnvdowwegrzyw398gf8joxiaw9bq3mnklmevnh0wybr375pxff1uf5ko9uxqw0cetc37h0kct20hai6vqqyb6czt3frkm45nppfzwaks1lo6njsf0qukbnm3dmoxbxsm6kxr698283slidqexmy7c9aads2nxictx62qihuemop6qms520oy67ml34k0sj6dm56dzfj3lhqisrfeu9ej5roy0wcjpftwwbvihcid4e63fgvfm49pe7sa2bb66tzgy1wuuoi0zzt7n6kwi87yagrcmxzj5xy7z9a7n4atr9er5xmsmjbyo88vtwk9ibald365fxi4if5o4kdk62wbbvm5x3ieowic0l1fswe9fkehai9dwebr504hgh4faawhfo2434j7e7x8qjfzvjv6g3t42grnye2uodq6fty8h0fct3o35z4af5e5f8mmjlmjywqe6t6lscgsozbyz3wkokyovs74e8b3h5qhqtw91auoqzj9jdlt == \v\l\4\s\n\8\0\x\4\d\3\u\t\n\v\d\o\w\w\e\g\r\z\y\w\3\9\8\g\f\8\j\o\x\i\a\w\9\b\q\3\m\n\k\l\m\e\v\n\h\0\w\y\b\r\3\7\5\p\x\f\f\1\u\f\5\k\o\9\u\x\q\w\0\c\e\t\c\3\7\h\0\k\c\t\2\0\h\a\i\6\v\q\q\y\b\6\c\z\t\3\f\r\k\m\4\5\n\p\p\f\z\w\a\k\s\1\l\o\6\n\j\s\f\0\q\u\k\b\n\m\3\d\m\o\x\b\x\s\m\6\k\x\r\6\9\8\2\8\3\s\l\i\d\q\e\x\m\y\7\c\9\a\a\d\s\2\n\x\i\c\t\x\6\2\q\i\h\u\e\m\o\p\6\q\m\s\5\2\0\o\y\6\7\m\l\3\4\k\0\s\j\6\d\m\5\6\d\z\f\j\3\l\h\q\i\s\r\f\e\u\9\e\j\5\r\o\y\0\w\c\j\p\f\t\w\w\b\v\i\h\c\i\d\4\e\6\3\f\g\v\f\m\4\9\p\e\7\s\a\2\b\b\6\6\t\z\g\y\1\w\u\u\o\i\0\z\z\t\7\n\6\k\w\i\8\7\y\a\g\r\c\m\x\z\j\5\x\y\7\z\9\a\7\n\4\a\t\r\9\e\r\5\x\m\s\m\j\b\y\o\8\8\v\t\w\k\9\i\b\a\l\d\3\6\5\f\x\i\4\i\f\5\o\4\k\d\k\6\2\w\b\b\v\m\5\x\3\i\e\o\w\i\c\0\l\1\f\s\w\e\9\f\k\e\h\a\i\9\d\w\e\b\r\5\0\4\h\g\h\4\f\a\a\w\h\f\o\2\4\3\4\j\7\e\7\x\8\q\j\f\z\v\j\v\6\g\3\t\4\2\g\r\n\y\e\2\u\o\d\q\6\f\t\y\8\h\0\f\c\t\3\o\3\5\z\4\a\f\5\e\5\f\8\m\m\j\l\m\j\y\w\q\e\6\t\6\l\s\c\g\s\o\z\b\y\z\3\w\k\o\k\y\o\v\s\7\4\e\8\b\3\h\5\q\h\q\t\w\9\1\a\u\o\q\z\j\9\j\d\l\t ]] 00:39:05.752 23:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:39:05.752 23:24:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:39:05.752 [2024-07-13 23:24:55.019012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:05.752 [2024-07-13 23:24:55.019238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175652 ] 00:39:06.011 [2024-07-13 23:24:55.165200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.011 [2024-07-13 23:24:55.256960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.270  Copying: 512/512 [B] (average 125 kBps) 00:39:06.270 00:39:06.270 23:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vl4sn80x4d3utnvdowwegrzyw398gf8joxiaw9bq3mnklmevnh0wybr375pxff1uf5ko9uxqw0cetc37h0kct20hai6vqqyb6czt3frkm45nppfzwaks1lo6njsf0qukbnm3dmoxbxsm6kxr698283slidqexmy7c9aads2nxictx62qihuemop6qms520oy67ml34k0sj6dm56dzfj3lhqisrfeu9ej5roy0wcjpftwwbvihcid4e63fgvfm49pe7sa2bb66tzgy1wuuoi0zzt7n6kwi87yagrcmxzj5xy7z9a7n4atr9er5xmsmjbyo88vtwk9ibald365fxi4if5o4kdk62wbbvm5x3ieowic0l1fswe9fkehai9dwebr504hgh4faawhfo2434j7e7x8qjfzvjv6g3t42grnye2uodq6fty8h0fct3o35z4af5e5f8mmjlmjywqe6t6lscgsozbyz3wkokyovs74e8b3h5qhqtw91auoqzj9jdlt == \v\l\4\s\n\8\0\x\4\d\3\u\t\n\v\d\o\w\w\e\g\r\z\y\w\3\9\8\g\f\8\j\o\x\i\a\w\9\b\q\3\m\n\k\l\m\e\v\n\h\0\w\y\b\r\3\7\5\p\x\f\f\1\u\f\5\k\o\9\u\x\q\w\0\c\e\t\c\3\7\h\0\k\c\t\2\0\h\a\i\6\v\q\q\y\b\6\c\z\t\3\f\r\k\m\4\5\n\p\p\f\z\w\a\k\s\1\l\o\6\n\j\s\f\0\q\u\k\b\n\m\3\d\m\o\x\b\x\s\m\6\k\x\r\6\9\8\2\8\3\s\l\i\d\q\e\x\m\y\7\c\9\a\a\d\s\2\n\x\i\c\t\x\6\2\q\i\h\u\e\m\o\p\6\q\m\s\5\2\0\o\y\6\7\m\l\3\4\k\0\s\j\6\d\m\5\6\d\z\f\j\3\l\h\q\i\s\r\f\e\u\9\e\j\5\r\o\y\0\w\c\j\p\f\t\w\w\b\v\i\h\c\i\d\4\e\6\3\f\g\v\f\m\4\9\p\e\7\s\a\2\b\b\6\6\t\z\g\y\1\w\u\u\o\i\0\z\z\t\7\n\6\k\w\i\8\7\y\a\g\r\c\m\x\z\j\5\x\y\7\z\9\a\7\n\4\a\t\r\9\e\r\5\x\m\s\m\j\b\y\o\8\8\v\t\w\k\9\i\b\a\l\d\3\6\5\f\x\i\4\i\f\5\o\4\k\d\k\6\2\w\b\b\v\m\5\x\3\i\e\o\w\i\c\0\l\1\f\s\w\e\9\f\k\e\h\a\i\9\d\w\e\b\r\5\0\4\h\g\h\4\f\a\a\w\h\f\o\2\4\3\4\j\7\e\7\x\8\q\j\f\z\v\j\v\6\g\3\t\4\2\g\r\n\y\e\2\u\o\d\q\6\f\t\y\8\h\0\f\c\t\3\o\3\5\z\4\a\f\5\e\5\f\8\m\m\j\l\m\j\y\w\q\e\6\t\6\l\s\c\g\s\o\z\b\y\z\3\w\k\o\k\y\o\v\s\7\4\e\8\b\3\h\5\q\h\q\t\w\9\1\a\u\o\q\z\j\9\j\d\l\t ]] 00:39:06.270 00:39:06.270 real 0m5.390s 00:39:06.270 user 0m2.701s 00:39:06.270 sys 0m1.556s 00:39:06.270 ************************************ 00:39:06.270 END TEST dd_flags_misc_forced_aio 00:39:06.270 ************************************ 00:39:06.270 23:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:06.270 23:24:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:39:06.530 23:24:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:39:06.530 23:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:39:06.530 23:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:39:06.530 23:24:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:39:06.530 00:39:06.530 real 0m23.514s 00:39:06.530 user 0m10.941s 00:39:06.530 sys 0m6.352s 00:39:06.530 23:24:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:06.530 23:24:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:39:06.530 ************************************ 00:39:06.530 
END TEST spdk_dd_posix 00:39:06.530 ************************************ 00:39:06.530 23:24:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:39:06.530 23:24:55 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:39:06.530 23:24:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:06.530 23:24:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:06.530 23:24:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:06.530 ************************************ 00:39:06.530 START TEST spdk_dd_malloc 00:39:06.530 ************************************ 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:39:06.530 * Looking for test storage... 00:39:06.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:39:06.530 ************************************ 00:39:06.530 START TEST dd_malloc_copy 00:39:06.530 ************************************ 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:06.530 23:24:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:06.530 [2024-07-13 23:24:55.898633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:06.530 [2024-07-13 23:24:55.898928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175731 ] 00:39:06.530 { 00:39:06.530 "subsystems": [ 00:39:06.530 { 00:39:06.530 "subsystem": "bdev", 00:39:06.530 "config": [ 00:39:06.530 { 00:39:06.530 "params": { 00:39:06.530 "block_size": 512, 00:39:06.530 "num_blocks": 1048576, 00:39:06.530 "name": "malloc0" 00:39:06.530 }, 00:39:06.530 "method": "bdev_malloc_create" 00:39:06.530 }, 00:39:06.530 { 00:39:06.530 "params": { 00:39:06.530 "block_size": 512, 00:39:06.530 "num_blocks": 1048576, 00:39:06.530 "name": "malloc1" 00:39:06.530 }, 00:39:06.530 "method": "bdev_malloc_create" 00:39:06.530 }, 00:39:06.530 { 00:39:06.530 "method": "bdev_wait_for_examine" 00:39:06.530 } 00:39:06.530 ] 00:39:06.530 } 00:39:06.530 ] 00:39:06.530 } 00:39:06.789 [2024-07-13 23:24:56.045294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.789 [2024-07-13 23:24:56.129126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.753  Copying: 195/512 [MB] (195 MBps) Copying: 389/512 [MB] (193 MBps) Copying: 512/512 [MB] (average 193 MBps) 00:39:10.753 00:39:10.753 23:24:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:39:10.753 23:24:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:39:10.753 23:24:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:39:10.753 23:24:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:10.753 { 00:39:10.753 "subsystems": [ 00:39:10.753 { 00:39:10.753 "subsystem": "bdev", 00:39:10.753 "config": [ 00:39:10.753 { 00:39:10.753 "params": { 00:39:10.753 "block_size": 512, 00:39:10.753 "num_blocks": 1048576, 00:39:10.753 "name": "malloc0" 00:39:10.753 }, 00:39:10.753 "method": "bdev_malloc_create" 00:39:10.753 }, 00:39:10.753 { 00:39:10.753 "params": { 00:39:10.753 "block_size": 512, 00:39:10.753 "num_blocks": 1048576, 00:39:10.753 "name": "malloc1" 00:39:10.753 }, 00:39:10.753 "method": "bdev_malloc_create" 00:39:10.753 }, 00:39:10.753 { 00:39:10.753 "method": "bdev_wait_for_examine" 00:39:10.753 } 00:39:10.753 ] 00:39:10.753 } 00:39:10.753 ] 00:39:10.753 } 00:39:10.753 [2024-07-13 23:24:59.869290] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:10.753 [2024-07-13 23:24:59.869584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175788 ] 00:39:10.753 [2024-07-13 23:25:00.015935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.753 [2024-07-13 23:25:00.109637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.345  Copying: 197/512 [MB] (197 MBps) Copying: 391/512 [MB] (194 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:39:14.345 00:39:14.603 00:39:14.603 real 0m7.911s 00:39:14.603 user 0m6.831s 00:39:14.603 sys 0m0.940s 00:39:14.603 23:25:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:14.603 23:25:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:39:14.603 ************************************ 00:39:14.603 END TEST dd_malloc_copy 00:39:14.603 ************************************ 00:39:14.603 23:25:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:39:14.603 00:39:14.603 real 0m8.046s 00:39:14.603 user 0m6.902s 00:39:14.603 sys 0m1.009s 00:39:14.603 23:25:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:14.603 23:25:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:39:14.603 ************************************ 00:39:14.603 END TEST spdk_dd_malloc 00:39:14.603 ************************************ 00:39:14.603 23:25:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:39:14.603 23:25:03 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:39:14.603 23:25:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:14.603 23:25:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:14.603 23:25:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:14.603 ************************************ 00:39:14.603 START TEST spdk_dd_bdev_to_bdev 00:39:14.603 ************************************ 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:39:14.603 * Looking for test storage... 
00:39:14.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:39:14.603 23:25:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:39:14.603 [2024-07-13 23:25:03.980975] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:14.603 [2024-07-13 23:25:03.981215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175902 ] 00:39:14.860 [2024-07-13 23:25:04.128476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.860 [2024-07-13 23:25:04.205210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.375  Copying: 256/256 [MB] (average 1254 MBps) 00:39:15.375 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:15.375 ************************************ 00:39:15.375 START TEST dd_inflate_file 00:39:15.375 ************************************ 00:39:15.375 23:25:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:39:15.633 [2024-07-13 23:25:04.828616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:15.633 [2024-07-13 23:25:04.829481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175926 ] 00:39:15.633 [2024-07-13 23:25:04.976615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.891 [2024-07-13 23:25:05.071595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.149  Copying: 64/64 [MB] (average 1306 MBps) 00:39:16.150 00:39:16.150 00:39:16.150 real 0m0.706s 00:39:16.150 user 0m0.341s 00:39:16.150 sys 0m0.235s 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:39:16.150 ************************************ 00:39:16.150 END TEST dd_inflate_file 00:39:16.150 ************************************ 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:16.150 ************************************ 00:39:16.150 START TEST dd_copy_to_out_bdev 00:39:16.150 ************************************ 00:39:16.150 23:25:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:39:16.409 [2024-07-13 23:25:05.588305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:16.409 [2024-07-13 23:25:05.588533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175967 ] 00:39:16.409 { 00:39:16.409 "subsystems": [ 00:39:16.409 { 00:39:16.409 "subsystem": "bdev", 00:39:16.409 "config": [ 00:39:16.409 { 00:39:16.409 "params": { 00:39:16.409 "block_size": 4096, 00:39:16.409 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:16.409 "name": "aio1" 00:39:16.409 }, 00:39:16.409 "method": "bdev_aio_create" 00:39:16.409 }, 00:39:16.409 { 00:39:16.409 "params": { 00:39:16.409 "trtype": "pcie", 00:39:16.409 "traddr": "0000:00:10.0", 00:39:16.409 "name": "Nvme0" 00:39:16.409 }, 00:39:16.409 "method": "bdev_nvme_attach_controller" 00:39:16.409 }, 00:39:16.409 { 00:39:16.409 "method": "bdev_wait_for_examine" 00:39:16.409 } 00:39:16.409 ] 00:39:16.409 } 00:39:16.409 ] 00:39:16.409 } 00:39:16.409 [2024-07-13 23:25:05.725769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.409 [2024-07-13 23:25:05.814462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.610  Copying: 43/64 [MB] (43 MBps) Copying: 64/64 [MB] (average 43 MBps) 00:39:18.610 00:39:18.610 00:39:18.610 real 0m2.295s 00:39:18.610 user 0m1.938s 00:39:18.610 sys 0m0.230s 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:18.610 ************************************ 00:39:18.610 END TEST dd_copy_to_out_bdev 00:39:18.610 ************************************ 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:18.610 ************************************ 00:39:18.610 START TEST dd_offset_magic 00:39:18.610 ************************************ 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic 
-- dd/common.sh@31 -- # xtrace_disable 00:39:18.610 23:25:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:18.610 [2024-07-13 23:25:07.956438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:18.610 [2024-07-13 23:25:07.956716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176019 ] 00:39:18.610 { 00:39:18.610 "subsystems": [ 00:39:18.610 { 00:39:18.610 "subsystem": "bdev", 00:39:18.610 "config": [ 00:39:18.610 { 00:39:18.610 "params": { 00:39:18.610 "block_size": 4096, 00:39:18.610 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:18.610 "name": "aio1" 00:39:18.610 }, 00:39:18.610 "method": "bdev_aio_create" 00:39:18.610 }, 00:39:18.610 { 00:39:18.610 "params": { 00:39:18.610 "trtype": "pcie", 00:39:18.610 "traddr": "0000:00:10.0", 00:39:18.610 "name": "Nvme0" 00:39:18.610 }, 00:39:18.610 "method": "bdev_nvme_attach_controller" 00:39:18.610 }, 00:39:18.610 { 00:39:18.610 "method": "bdev_wait_for_examine" 00:39:18.610 } 00:39:18.610 ] 00:39:18.610 } 00:39:18.610 ] 00:39:18.610 } 00:39:18.868 [2024-07-13 23:25:08.104193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.868 [2024-07-13 23:25:08.167993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.004  Copying: 65/65 [MB] (average 143 MBps) 00:39:20.004 00:39:20.004 23:25:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:39:20.004 23:25:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:20.004 23:25:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:20.004 23:25:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:20.004 [2024-07-13 23:25:09.193937] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:20.004 [2024-07-13 23:25:09.194214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176040 ] 00:39:20.004 { 00:39:20.004 "subsystems": [ 00:39:20.004 { 00:39:20.004 "subsystem": "bdev", 00:39:20.004 "config": [ 00:39:20.004 { 00:39:20.004 "params": { 00:39:20.004 "block_size": 4096, 00:39:20.004 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:20.004 "name": "aio1" 00:39:20.004 }, 00:39:20.004 "method": "bdev_aio_create" 00:39:20.004 }, 00:39:20.004 { 00:39:20.004 "params": { 00:39:20.004 "trtype": "pcie", 00:39:20.004 "traddr": "0000:00:10.0", 00:39:20.004 "name": "Nvme0" 00:39:20.004 }, 00:39:20.004 "method": "bdev_nvme_attach_controller" 00:39:20.004 }, 00:39:20.004 { 00:39:20.004 "method": "bdev_wait_for_examine" 00:39:20.004 } 00:39:20.004 ] 00:39:20.004 } 00:39:20.004 ] 00:39:20.004 } 00:39:20.004 [2024-07-13 23:25:09.344469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.262 [2024-07-13 23:25:09.444887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.840  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:20.840 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:20.840 23:25:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:20.840 [2024-07-13 23:25:10.081053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:20.840 [2024-07-13 23:25:10.081334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176061 ] 00:39:20.840 { 00:39:20.840 "subsystems": [ 00:39:20.840 { 00:39:20.840 "subsystem": "bdev", 00:39:20.840 "config": [ 00:39:20.840 { 00:39:20.840 "params": { 00:39:20.840 "block_size": 4096, 00:39:20.840 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:20.840 "name": "aio1" 00:39:20.840 }, 00:39:20.840 "method": "bdev_aio_create" 00:39:20.840 }, 00:39:20.840 { 00:39:20.840 "params": { 00:39:20.840 "trtype": "pcie", 00:39:20.840 "traddr": "0000:00:10.0", 00:39:20.840 "name": "Nvme0" 00:39:20.840 }, 00:39:20.840 "method": "bdev_nvme_attach_controller" 00:39:20.840 }, 00:39:20.840 { 00:39:20.840 "method": "bdev_wait_for_examine" 00:39:20.840 } 00:39:20.840 ] 00:39:20.840 } 00:39:20.840 ] 00:39:20.840 } 00:39:20.840 [2024-07-13 23:25:10.231422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.098 [2024-07-13 23:25:10.332146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.923  Copying: 65/65 [MB] (average 167 MBps) 00:39:21.923 00:39:21.923 23:25:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:39:21.923 23:25:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:21.923 23:25:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:21.923 23:25:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:21.923 [2024-07-13 23:25:11.302380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:21.923 [2024-07-13 23:25:11.302653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176084 ] 00:39:21.923 { 00:39:21.923 "subsystems": [ 00:39:21.923 { 00:39:21.923 "subsystem": "bdev", 00:39:21.923 "config": [ 00:39:21.923 { 00:39:21.923 "params": { 00:39:21.923 "block_size": 4096, 00:39:21.923 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:21.923 "name": "aio1" 00:39:21.923 }, 00:39:21.923 "method": "bdev_aio_create" 00:39:21.923 }, 00:39:21.923 { 00:39:21.923 "params": { 00:39:21.923 "trtype": "pcie", 00:39:21.923 "traddr": "0000:00:10.0", 00:39:21.923 "name": "Nvme0" 00:39:21.923 }, 00:39:21.923 "method": "bdev_nvme_attach_controller" 00:39:21.923 }, 00:39:21.923 { 00:39:21.923 "method": "bdev_wait_for_examine" 00:39:21.923 } 00:39:21.923 ] 00:39:21.923 } 00:39:21.923 ] 00:39:21.923 } 00:39:22.181 [2024-07-13 23:25:11.452222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.181 [2024-07-13 23:25:11.559507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.088  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:23.088 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:23.088 00:39:23.088 real 0m4.218s 00:39:23.088 user 0m2.147s 00:39:23.088 sys 0m0.958s 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:23.088 ************************************ 00:39:23.088 END TEST dd_offset_magic 00:39:23.088 ************************************ 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:23.088 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:23.088 [2024-07-13 23:25:12.215709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:23.088 [2024-07-13 23:25:12.216243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176122 ] 00:39:23.088 { 00:39:23.088 "subsystems": [ 00:39:23.088 { 00:39:23.088 "subsystem": "bdev", 00:39:23.088 "config": [ 00:39:23.088 { 00:39:23.088 "params": { 00:39:23.088 "block_size": 4096, 00:39:23.088 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:23.088 "name": "aio1" 00:39:23.088 }, 00:39:23.088 "method": "bdev_aio_create" 00:39:23.088 }, 00:39:23.088 { 00:39:23.088 "params": { 00:39:23.088 "trtype": "pcie", 00:39:23.088 "traddr": "0000:00:10.0", 00:39:23.088 "name": "Nvme0" 00:39:23.088 }, 00:39:23.088 "method": "bdev_nvme_attach_controller" 00:39:23.088 }, 00:39:23.088 { 00:39:23.088 "method": "bdev_wait_for_examine" 00:39:23.088 } 00:39:23.088 ] 00:39:23.088 } 00:39:23.088 ] 00:39:23.088 } 00:39:23.088 [2024-07-13 23:25:12.365757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:23.088 [2024-07-13 23:25:12.461041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:23.606  Copying: 5120/5120 [kB] (average 1000 MBps) 00:39:23.606 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:23.606 23:25:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:23.865 [2024-07-13 23:25:13.041378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:23.865 [2024-07-13 23:25:13.041869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176140 ] 00:39:23.865 { 00:39:23.865 "subsystems": [ 00:39:23.865 { 00:39:23.865 "subsystem": "bdev", 00:39:23.865 "config": [ 00:39:23.865 { 00:39:23.865 "params": { 00:39:23.865 "block_size": 4096, 00:39:23.865 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:23.865 "name": "aio1" 00:39:23.865 }, 00:39:23.865 "method": "bdev_aio_create" 00:39:23.865 }, 00:39:23.865 { 00:39:23.865 "params": { 00:39:23.865 "trtype": "pcie", 00:39:23.865 "traddr": "0000:00:10.0", 00:39:23.865 "name": "Nvme0" 00:39:23.865 }, 00:39:23.865 "method": "bdev_nvme_attach_controller" 00:39:23.865 }, 00:39:23.865 { 00:39:23.865 "method": "bdev_wait_for_examine" 00:39:23.865 } 00:39:23.865 ] 00:39:23.865 } 00:39:23.865 ] 00:39:23.865 } 00:39:23.865 [2024-07-13 23:25:13.188882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.123 [2024-07-13 23:25:13.278367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.689  Copying: 5120/5120 [kB] (average 1250 MBps) 00:39:24.689 00:39:24.689 23:25:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:39:24.689 ************************************ 00:39:24.689 END TEST spdk_dd_bdev_to_bdev 00:39:24.689 ************************************ 00:39:24.689 00:39:24.689 real 0m10.033s 00:39:24.689 user 0m5.914s 00:39:24.689 sys 0m2.376s 00:39:24.689 23:25:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:24.689 23:25:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:24.689 23:25:13 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:39:24.689 23:25:13 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:39:24.689 23:25:13 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:24.689 23:25:13 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:24.689 23:25:13 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:24.689 23:25:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:24.689 ************************************ 00:39:24.689 START TEST spdk_dd_sparse 00:39:24.689 ************************************ 00:39:24.689 23:25:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:24.689 * Looking for test storage... 
00:39:24.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:24.689 23:25:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:39:24.690 1+0 records in 00:39:24.690 1+0 records out 00:39:24.690 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00816642 s, 514 MB/s 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:39:24.690 1+0 records in 00:39:24.690 1+0 records out 00:39:24.690 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00871255 s, 481 MB/s 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:39:24.690 1+0 records in 00:39:24.690 1+0 records out 00:39:24.690 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0103369 s, 406 MB/s 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:24.690 ************************************ 00:39:24.690 START TEST dd_sparse_file_to_file 00:39:24.690 ************************************ 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:24.690 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:24.949 [2024-07-13 23:25:14.130240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:24.949 [2024-07-13 23:25:14.130704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176219 ] 00:39:24.949 { 00:39:24.949 "subsystems": [ 00:39:24.949 { 00:39:24.949 "subsystem": "bdev", 00:39:24.949 "config": [ 00:39:24.949 { 00:39:24.949 "params": { 00:39:24.949 "block_size": 4096, 00:39:24.949 "filename": "dd_sparse_aio_disk", 00:39:24.949 "name": "dd_aio" 00:39:24.949 }, 00:39:24.949 "method": "bdev_aio_create" 00:39:24.949 }, 00:39:24.949 { 00:39:24.949 "params": { 00:39:24.949 "lvs_name": "dd_lvstore", 00:39:24.949 "bdev_name": "dd_aio" 00:39:24.949 }, 00:39:24.949 "method": "bdev_lvol_create_lvstore" 00:39:24.949 }, 00:39:24.949 { 00:39:24.949 "method": "bdev_wait_for_examine" 00:39:24.949 } 00:39:24.949 ] 00:39:24.949 } 00:39:24.949 ] 00:39:24.949 } 00:39:24.949 [2024-07-13 23:25:14.276469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.949 [2024-07-13 23:25:14.342053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.466  Copying: 12/36 [MB] (average 1200 MBps) 00:39:25.466 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:39:25.466 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:25.725 00:39:25.725 real 0m0.802s 00:39:25.725 user 0m0.422s 00:39:25.725 sys 0m0.235s 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:25.725 ************************************ 00:39:25.725 END TEST dd_sparse_file_to_file 00:39:25.725 ************************************ 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:25.725 ************************************ 00:39:25.725 START TEST dd_sparse_file_to_bdev 00:39:25.725 ************************************ 00:39:25.725 23:25:14 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:25.725 23:25:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:25.725 [2024-07-13 23:25:14.986179] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:25.725 [2024-07-13 23:25:14.986694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176266 ] 00:39:25.725 { 00:39:25.725 "subsystems": [ 00:39:25.725 { 00:39:25.725 "subsystem": "bdev", 00:39:25.726 "config": [ 00:39:25.726 { 00:39:25.726 "params": { 00:39:25.726 "block_size": 4096, 00:39:25.726 "filename": "dd_sparse_aio_disk", 00:39:25.726 "name": "dd_aio" 00:39:25.726 }, 00:39:25.726 "method": "bdev_aio_create" 00:39:25.726 }, 00:39:25.726 { 00:39:25.726 "params": { 00:39:25.726 "lvs_name": "dd_lvstore", 00:39:25.726 "lvol_name": "dd_lvol", 00:39:25.726 "size_in_mib": 36, 00:39:25.726 "thin_provision": true 00:39:25.726 }, 00:39:25.726 "method": "bdev_lvol_create" 00:39:25.726 }, 00:39:25.726 { 00:39:25.726 "method": "bdev_wait_for_examine" 00:39:25.726 } 00:39:25.726 ] 00:39:25.726 } 00:39:25.726 ] 00:39:25.726 } 00:39:25.985 [2024-07-13 23:25:15.134988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.985 [2024-07-13 23:25:15.235129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.553  Copying: 12/36 [MB] (average 500 MBps) 00:39:26.553 00:39:26.553 ************************************ 00:39:26.553 END TEST dd_sparse_file_to_bdev 00:39:26.553 ************************************ 00:39:26.553 00:39:26.553 real 0m0.823s 00:39:26.553 user 0m0.473s 00:39:26.553 sys 0m0.223s 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:26.553 23:25:15 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:26.553 ************************************ 00:39:26.553 START TEST dd_sparse_bdev_to_file 00:39:26.553 ************************************ 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:26.553 23:25:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:26.553 [2024-07-13 23:25:15.868787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:26.553 [2024-07-13 23:25:15.869300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176311 ] 00:39:26.553 { 00:39:26.553 "subsystems": [ 00:39:26.553 { 00:39:26.553 "subsystem": "bdev", 00:39:26.553 "config": [ 00:39:26.553 { 00:39:26.553 "params": { 00:39:26.553 "block_size": 4096, 00:39:26.553 "filename": "dd_sparse_aio_disk", 00:39:26.553 "name": "dd_aio" 00:39:26.553 }, 00:39:26.553 "method": "bdev_aio_create" 00:39:26.553 }, 00:39:26.553 { 00:39:26.553 "method": "bdev_wait_for_examine" 00:39:26.553 } 00:39:26.553 ] 00:39:26.553 } 00:39:26.553 ] 00:39:26.553 } 00:39:26.812 [2024-07-13 23:25:16.014562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.812 [2024-07-13 23:25:16.102505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.330  Copying: 12/36 [MB] (average 1000 MBps) 00:39:27.330 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:39:27.330 23:25:16 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:39:27.330 ************************************ 00:39:27.330 END TEST dd_sparse_bdev_to_file 00:39:27.330 ************************************ 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:27.330 00:39:27.330 real 0m0.790s 00:39:27.330 user 0m0.427s 00:39:27.330 sys 0m0.255s 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:39:27.330 ************************************ 00:39:27.330 END TEST spdk_dd_sparse 00:39:27.330 ************************************ 00:39:27.330 00:39:27.330 real 0m2.739s 00:39:27.330 user 0m1.462s 00:39:27.330 sys 0m0.890s 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:27.330 23:25:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:27.330 23:25:16 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:39:27.330 23:25:16 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:27.330 23:25:16 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:27.330 23:25:16 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:27.330 23:25:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:27.330 ************************************ 00:39:27.330 START TEST spdk_dd_negative 00:39:27.330 ************************************ 00:39:27.330 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:27.590 * Looking for test storage... 
00:39:27.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:27.590 ************************************ 00:39:27.590 START TEST dd_invalid_arguments 00:39:27.590 ************************************ 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:27.590 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:27.590 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:39:27.590 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:39:27.590 00:39:27.590 CPU options: 00:39:27.590 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:39:27.590 (like [0,1,10]) 00:39:27.590 --lcores lcore to CPU mapping list. The list is in the format: 00:39:27.590 [<,lcores[@CPUs]>...] 00:39:27.590 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:39:27.590 Within the group, '-' is used for range separator, 00:39:27.590 ',' is used for single number separator. 00:39:27.590 '( )' can be omitted for single element group, 00:39:27.590 '@' can be omitted if cpus and lcores have the same value 00:39:27.590 --disable-cpumask-locks Disable CPU core lock files. 
00:39:27.590 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:39:27.590 pollers in the app support interrupt mode) 00:39:27.590 -p, --main-core main (primary) core for DPDK 00:39:27.590 00:39:27.590 Configuration options: 00:39:27.590 -c, --config, --json JSON config file 00:39:27.590 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:39:27.590 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:39:27.590 --wait-for-rpc wait for RPCs to initialize subsystems 00:39:27.590 --rpcs-allowed comma-separated list of permitted RPCS 00:39:27.590 --json-ignore-init-errors don't exit on invalid config entry 00:39:27.590 00:39:27.590 Memory options: 00:39:27.590 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:39:27.590 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:39:27.591 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:39:27.591 -R, --huge-unlink unlink huge files after initialization 00:39:27.591 -n, --mem-channels number of memory channels used for DPDK 00:39:27.591 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:39:27.591 --msg-mempool-size global message memory pool size in count (default: 262143) 00:39:27.591 --no-huge run without using hugepages 00:39:27.591 -i, --shm-id shared memory ID (optional) 00:39:27.591 -g, --single-file-segments force creating just one hugetlbfs file 00:39:27.591 00:39:27.591 PCI options: 00:39:27.591 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:39:27.591 -B, --pci-blocked pci addr to block (can be used more than once) 00:39:27.591 -u, --no-pci disable PCI access 00:39:27.591 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:39:27.591 00:39:27.591 Log options: 00:39:27.591 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:39:27.591 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:39:27.591 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:39:27.591 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:39:27.591 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:39:27.591 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:39:27.591 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:39:27.591 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:39:27.591 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:39:27.591 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:39:27.591 virtio_vfio_user, vmd) 00:39:27.591 --silence-noticelog disable notice level logging to stderr 00:39:27.591 00:39:27.591 Trace options: 00:39:27.591 --num-trace-entries number of trace entries for each core, must be power of 2, 00:39:27.591 setting 0 to disable trace (default 32768) 00:39:27.591 Tracepoints vary in size and can use more than one trace entry. 00:39:27.591 -e, --tpoint-group [:] 00:39:27.591 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:39:27.591 [2024-07-13 23:25:16.883150] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:39:27.591 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:39:27.591 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:39:27.591 a tracepoint group. 
First tpoint inside a group can be enabled by 00:39:27.591 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:39:27.591 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:39:27.591 in /include/spdk_internal/trace_defs.h 00:39:27.591 00:39:27.591 Other options: 00:39:27.591 -h, --help show this usage 00:39:27.591 -v, --version print SPDK version 00:39:27.591 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:39:27.591 --env-context Opaque context for use of the env implementation 00:39:27.591 00:39:27.591 Application specific: 00:39:27.591 [--------- DD Options ---------] 00:39:27.591 --if Input file. Must specify either --if or --ib. 00:39:27.591 --ib Input bdev. Must specify either --if or --ib 00:39:27.591 --of Output file. Must specify either --of or --ob. 00:39:27.591 --ob Output bdev. Must specify either --of or --ob. 00:39:27.591 --iflag Input file flags. 00:39:27.591 --oflag Output file flags. 00:39:27.591 --bs I/O unit size (default: 4096) 00:39:27.591 --qd Queue depth (default: 2) 00:39:27.591 --count I/O unit count. The number of I/O units to copy. (default: all) 00:39:27.591 --skip Skip this many I/O units at start of input. (default: 0) 00:39:27.591 --seek Skip this many I/O units at start of output. (default: 0) 00:39:27.591 --aio Force usage of AIO. (by default io_uring is used if available) 00:39:27.591 --sparse Enable hole skipping in input target 00:39:27.591 Available iflag and oflag values: 00:39:27.591 append - append mode 00:39:27.591 direct - use direct I/O for data 00:39:27.591 directory - fail unless a directory 00:39:27.591 dsync - use synchronized I/O for data 00:39:27.591 noatime - do not update access time 00:39:27.591 noctty - do not assign controlling terminal from file 00:39:27.591 nofollow - do not follow symlinks 00:39:27.591 nonblock - use non-blocking I/O 00:39:27.591 sync - use synchronized I/O for data and metadata 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:27.591 00:39:27.591 real 0m0.103s 00:39:27.591 user 0m0.065s 00:39:27.591 sys 0m0.038s 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:39:27.591 ************************************ 00:39:27.591 END TEST dd_invalid_arguments 00:39:27.591 ************************************ 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:27.591 ************************************ 00:39:27.591 START TEST dd_double_input 00:39:27.591 ************************************ 00:39:27.591 23:25:16
spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:27.591 23:25:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:27.850 [2024-07-13 23:25:17.036118] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:27.850 00:39:27.850 real 0m0.100s 00:39:27.850 user 0m0.064s 00:39:27.850 sys 0m0.037s 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:39:27.850 ************************************ 00:39:27.850 END TEST dd_double_input 00:39:27.850 ************************************ 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:27.850 ************************************ 00:39:27.850 START TEST dd_double_output 00:39:27.850 ************************************ 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:27.850 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:27.851 [2024-07-13 23:25:17.185892] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:27.851 00:39:27.851 real 0m0.102s 00:39:27.851 user 0m0.062s 00:39:27.851 sys 0m0.041s 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:27.851 23:25:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:39:27.851 ************************************ 00:39:27.851 END TEST dd_double_output 00:39:27.851 ************************************ 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:28.110 ************************************ 00:39:28.110 START TEST dd_no_input 00:39:28.110 ************************************ 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:28.110 23:25:17 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:28.110 [2024-07-13 23:25:17.340691] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:28.110 00:39:28.110 real 0m0.102s 00:39:28.110 user 0m0.066s 00:39:28.110 sys 0m0.037s 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:39:28.110 ************************************ 00:39:28.110 END TEST dd_no_input 00:39:28.110 ************************************ 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:28.110 ************************************ 00:39:28.110 START TEST dd_no_output 00:39:28.110 ************************************ 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.110 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:28.110 23:25:17 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:28.110 [2024-07-13 23:25:17.497130] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:28.370 00:39:28.370 real 0m0.101s 00:39:28.370 user 0m0.059s 00:39:28.370 sys 0m0.043s 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:39:28.370 ************************************ 00:39:28.370 END TEST dd_no_output 00:39:28.370 ************************************ 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:28.370 ************************************ 00:39:28.370 START TEST dd_wrong_blocksize 00:39:28.370 ************************************ 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:28.370 [2024-07-13 23:25:17.654618] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:28.370 00:39:28.370 real 0m0.093s 00:39:28.370 user 0m0.059s 00:39:28.370 sys 0m0.034s 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:28.370 ************************************ 00:39:28.370 END TEST dd_wrong_blocksize 00:39:28.370 ************************************ 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:28.370 ************************************ 00:39:28.370 START TEST dd_smaller_blocksize 00:39:28.370 ************************************ 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:28.370 23:25:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:28.629 [2024-07-13 23:25:17.807566] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:28.629 [2024-07-13 23:25:17.807837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176569 ] 00:39:28.629 [2024-07-13 23:25:17.957322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.887 [2024-07-13 23:25:18.059477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.887 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:39:28.887 [2024-07-13 23:25:18.224136] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:39:28.887 [2024-07-13 23:25:18.224538] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:29.145 [2024-07-13 23:25:18.351243] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:29.145 00:39:29.145 real 0m0.725s 00:39:29.145 user 0m0.404s 00:39:29.145 sys 0m0.220s 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:29.145 ************************************ 00:39:29.145 END TEST dd_smaller_blocksize 00:39:29.145 ************************************ 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:29.145 ************************************ 00:39:29.145 START TEST dd_invalid_count 00:39:29.145 ************************************ 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.145 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.146 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:29.146 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:29.404 [2024-07-13 23:25:18.585161] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:29.404 00:39:29.404 real 0m0.105s 00:39:29.404 user 0m0.061s 00:39:29.404 sys 0m0.043s 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:39:29.404 ************************************ 00:39:29.404 END TEST dd_invalid_count 00:39:29.404 ************************************ 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 
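(The dd_double_input through dd_invalid_count failures above all flow through the same NOT/valid_exec_arg wrapper that the xtrace lines expose: run spdk_dd with a bad flag combination, expect a clean nonzero exit. A minimal sketch of that pattern, with valid_exec_arg's type-resolution elided and paths shortened from /home/vagrant/spdk_repo/spdk — both simplifications are assumptions, not the verbatim harness:

#!/usr/bin/env bash
# NOT runs a command that is expected to fail and inverts its status,
# mirroring the es bookkeeping traced above: es=22 (EINVAL) is the clean
# argument rejection, while (( es > 128 )) would mean the binary died on
# a signal instead of validating its input.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

SPDK_DD=./build/bin/spdk_dd   # assumed relative path to the build

# e.g. dd_invalid_count: a negative --count must be rejected
# ("Invalid --count value" in the trace above).
NOT "$SPDK_DD" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --count=-9

The `(( !es == 0 ))` check in the trace is the harness asserting that the expected-failure branch was actually taken.)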
00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:29.404 ************************************ 00:39:29.404 START TEST dd_invalid_oflag 00:39:29.404 ************************************ 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:29.404 [2024-07-13 23:25:18.730103] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:29.404 00:39:29.404 real 0m0.087s 00:39:29.404 user 0m0.046s 00:39:29.404 sys 0m0.042s 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:29.404 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:39:29.404 ************************************ 00:39:29.404 END TEST dd_invalid_oflag 00:39:29.404 ************************************ 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1142 -- # return 0 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:29.664 ************************************ 00:39:29.664 START TEST dd_invalid_iflag 00:39:29.664 ************************************ 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:29.664 [2024-07-13 23:25:18.886666] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:29.664 00:39:29.664 real 0m0.105s 00:39:29.664 user 0m0.051s 00:39:29.664 sys 0m0.054s 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:39:29.664 ************************************ 00:39:29.664 END TEST dd_invalid_iflag 00:39:29.664 ************************************ 
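(dd_invalid_oflag and dd_invalid_iflag pair each direction-specific flag with the wrong endpoint, so spdk_dd rejects the command line before any I/O is set up. A hedged reproduction of the two calls, binary path assumed as above:

# --oflag is accepted only together with --of, and --iflag only with --if;
# each command exits nonzero with the messages logged above
# ("--oflags may be used only with --of" / "--iflags may be used only with --if").
./build/bin/spdk_dd --ib= --ob= --oflag=0 || echo "oflag rejected, as expected"
./build/bin/spdk_dd --ib= --ob= --iflag=0 || echo "iflag rejected, as expected"
)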
00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:29.664 ************************************ 00:39:29.664 START TEST dd_unknown_flag 00:39:29.664 ************************************ 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:29.664 23:25:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:29.664 [2024-07-13 23:25:19.041059] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:29.664 [2024-07-13 23:25:19.041360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176682 ] 00:39:29.923 [2024-07-13 23:25:19.188276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.923 [2024-07-13 23:25:19.282263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.182 [2024-07-13 23:25:19.367996] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:39:30.182 [2024-07-13 23:25:19.368418] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:30.182  Copying: 0/0 [B] (average 0 Bps)[2024-07-13 23:25:19.368763] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:39:30.182 [2024-07-13 23:25:19.489152] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:30.441 00:39:30.441 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:30.441 00:39:30.441 real 0m0.681s 00:39:30.441 user 0m0.345s 00:39:30.441 sys 0m0.195s 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:30.441 ************************************ 00:39:30.441 END TEST dd_unknown_flag 00:39:30.441 ************************************ 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:30.441 ************************************ 00:39:30.441 START TEST dd_invalid_json 00:39:30.441 ************************************ 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:30.441 23:25:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:30.441 [2024-07-13 23:25:19.781405] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:30.441 [2024-07-13 23:25:19.781681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176717 ] 00:39:30.700 [2024-07-13 23:25:19.929005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.700 [2024-07-13 23:25:20.026247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.700 [2024-07-13 23:25:20.026611] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:39:30.700 [2024-07-13 23:25:20.026766] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:30.700 [2024-07-13 23:25:20.026898] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:30.700 [2024-07-13 23:25:20.027043] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:30.958 ************************************ 00:39:30.958 END TEST dd_invalid_json 00:39:30.958 ************************************ 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:30.958 00:39:30.958 real 0m0.431s 00:39:30.958 user 0m0.198s 00:39:30.958 sys 0m0.133s 
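(dd_invalid_json hands spdk_dd its configuration over an anonymous file descriptor — `--json /dev/fd/62` in the trace — whose body is empty; the `negative_dd.sh@95 -- # :` line suggests bash process substitution over the no-op `:` command produced that descriptor. A sketch of the failing call under that assumption:

# An empty config document is rejected by the parser
# ("parse_json: JSON data cannot be empty"), no RPC server ever listens,
# and spdk_dd stops through spdk_app_stop with a nonzero status
# (es=234, remapped to 1 by the harness above).
./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --json <(:) \
    || echo "empty JSON rejected, as expected"
)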
00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:39:30.958 ************************************ 00:39:30.958 END TEST spdk_dd_negative 00:39:30.958 ************************************ 00:39:30.958 00:39:30.958 real 0m3.475s 00:39:30.958 user 0m1.864s 00:39:30.958 sys 0m1.240s 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:30.958 23:25:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:30.958 23:25:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:39:30.958 00:39:30.958 real 1m9.014s 00:39:30.958 user 0m40.449s 00:39:30.958 sys 0m17.963s 00:39:30.958 23:25:20 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:30.958 23:25:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:30.958 ************************************ 00:39:30.958 END TEST spdk_dd 00:39:30.958 ************************************ 00:39:30.958 23:25:20 -- common/autotest_common.sh@1142 -- # return 0 00:39:30.958 23:25:20 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:39:30.958 23:25:20 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:39:30.958 23:25:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:30.958 23:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:30.958 23:25:20 -- common/autotest_common.sh@10 -- # set +x 00:39:30.958 ************************************ 00:39:30.958 START TEST blockdev_nvme 00:39:30.958 ************************************ 00:39:30.958 23:25:20 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:39:30.958 * Looking for test storage... 
00:39:31.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:31.217 23:25:20 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=176803 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:31.217 23:25:20 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 176803 00:39:31.217 23:25:20 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 176803 ']' 00:39:31.217 23:25:20 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:31.217 23:25:20 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:31.217 23:25:20 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:31.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:31.217 23:25:20 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:31.217 23:25:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:31.217 [2024-07-13 23:25:20.449154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:31.217 [2024-07-13 23:25:20.449434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176803 ] 00:39:31.217 [2024-07-13 23:25:20.594232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.476 [2024-07-13 23:25:20.691842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.044 23:25:21 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:32.044 23:25:21 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:39:32.044 23:25:21 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:32.044 23:25:21 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:39:32.044 23:25:21 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:39:32.044 23:25:21 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:39:32.044 23:25:21 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq 
-r '.[] | select(.claimed == false)' 00:39:32.302 23:25:21 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9395fc8e-8422-448a-a49d-7cc31f176469"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9395fc8e-8422-448a-a49d-7cc31f176469",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:39:32.302 23:25:21 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:39:32.560 23:25:21 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:39:32.560 23:25:21 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:39:32.560 23:25:21 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:39:32.560 23:25:21 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 176803 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 176803 ']' 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 176803 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176803 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:32.560 killing process with pid 176803 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176803' 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 176803 00:39:32.560 23:25:21 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 176803 00:39:32.818 23:25:22 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:32.818 23:25:22 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 
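(The jq pipeline traced at blockdev.sh@748/749 narrows the bdev_get_bdevs dump to unclaimed bdevs and extracts their names; the first entry becomes hello_world_bdev — Nvme0n1 in this run. Roughly, with scripts/rpc.py standing in for the in-test rpc_cmd wrapper (an assumption; the test talks to the target over its RPC socket):

# Select unclaimed bdevs from the JSON dump and take their names.
# jq happily consumes the stream of objects the first stage emits.
scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.claimed == false)' \
  | jq -r .name
)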
00:39:32.818 23:25:22 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:32.818 23:25:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:32.818 23:25:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:32.818 ************************************ 00:39:32.818 START TEST bdev_hello_world 00:39:32.818 ************************************ 00:39:32.818 23:25:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:39:33.075 [2024-07-13 23:25:22.260996] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:33.075 [2024-07-13 23:25:22.261230] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176872 ] 00:39:33.075 [2024-07-13 23:25:22.400205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.075 [2024-07-13 23:25:22.473938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.333 [2024-07-13 23:25:22.685971] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:33.333 [2024-07-13 23:25:22.686103] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:39:33.333 [2024-07-13 23:25:22.686168] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:33.333 [2024-07-13 23:25:22.688810] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:33.333 [2024-07-13 23:25:22.689400] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:33.333 [2024-07-13 23:25:22.689455] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:33.333 [2024-07-13 23:25:22.689712] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
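(The bdev_hello_world pass that follows boils down to one invocation of the example binary against the generated NVMe config; its NOTICE lines trace the open/write/read round trip ending in "Read string from bdev : Hello World!". A hedged equivalent with the repo paths shortened:

# hello_bdev opens Nvme0n1 from the supplied JSON config, writes
# "Hello World!" through an io channel, reads it back, and stops the app.
./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
)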
00:39:33.333 00:39:33.333 [2024-07-13 23:25:22.689771] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:33.590 00:39:33.590 real 0m0.742s 00:39:33.590 user 0m0.463s 00:39:33.590 sys 0m0.180s 00:39:33.590 23:25:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:33.590 ************************************ 00:39:33.590 END TEST bdev_hello_world 00:39:33.590 ************************************ 00:39:33.590 23:25:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:33.590 23:25:22 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:39:33.590 23:25:22 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:39:33.590 23:25:22 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:33.590 23:25:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:33.590 23:25:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:33.849 ************************************ 00:39:33.849 START TEST bdev_bounds 00:39:33.849 ************************************ 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=176911 00:39:33.849 Process bdevio pid: 176911 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 176911' 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 176911 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 176911 ']' 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:33.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:33.849 23:25:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:33.849 [2024-07-13 23:25:23.062424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:39:33.849 [2024-07-13 23:25:23.062713] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176911 ] 00:39:33.849 [2024-07-13 23:25:23.222819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:34.107 [2024-07-13 23:25:23.317529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.107 [2024-07-13 23:25:23.317624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.107 [2024-07-13 23:25:23.317616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:34.674 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:34.674 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:39:34.674 23:25:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:34.933 I/O targets: 00:39:34.933 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:39:34.933 00:39:34.933 00:39:34.933 CUnit - A unit testing framework for C - Version 2.1-3 00:39:34.933 http://cunit.sourceforge.net/ 00:39:34.933 00:39:34.933 00:39:34.933 Suite: bdevio tests on: Nvme0n1 00:39:34.933 Test: blockdev write read block ...passed 00:39:34.933 Test: blockdev write zeroes read block ...passed 00:39:34.933 Test: blockdev write zeroes read no split ...passed 00:39:34.933 Test: blockdev write zeroes read split ...passed 00:39:34.933 Test: blockdev write zeroes read split partial ...passed 00:39:34.933 Test: blockdev reset ...[2024-07-13 23:25:24.145045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:34.933 [2024-07-13 23:25:24.147442] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:34.933 passed 00:39:34.933 Test: blockdev write read 8 blocks ...passed 00:39:34.933 Test: blockdev write read size > 128k ...passed 00:39:34.933 Test: blockdev write read invalid size ...passed 00:39:34.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:34.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:34.933 Test: blockdev write read max offset ...passed 00:39:34.933 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:34.933 Test: blockdev writev readv 8 blocks ...passed 00:39:34.933 Test: blockdev writev readv 30 x 1block ...passed 00:39:34.933 Test: blockdev writev readv block ...passed 00:39:34.933 Test: blockdev writev readv size > 128k ...passed 00:39:34.933 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:34.933 Test: blockdev comparev and writev ...[2024-07-13 23:25:24.153334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1700d000 len:0x1000 00:39:34.933 [2024-07-13 23:25:24.153458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:39:34.933 passed 00:39:34.933 Test: blockdev nvme passthru rw ...passed 00:39:34.933 Test: blockdev nvme passthru vendor specific ...[2024-07-13 23:25:24.154369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:39:34.933 [2024-07-13 23:25:24.154446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:39:34.933 passed 00:39:34.933 Test: blockdev nvme admin passthru ...passed 00:39:34.933 Test: blockdev copy ...passed 00:39:34.933 00:39:34.933 Run Summary: Type Total Ran Passed Failed Inactive 00:39:34.933 suites 1 1 n/a 0 0 00:39:34.933 tests 23 23 23 0 0 00:39:34.933 asserts 152 152 152 0 n/a 00:39:34.933 00:39:34.933 Elapsed time = 0.056 seconds 00:39:34.933 0 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 176911 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 176911 ']' 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 176911 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176911 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:34.933 killing process with pid 176911 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176911' 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 176911 00:39:34.933 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 176911 00:39:35.192 23:25:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:39:35.192 00:39:35.192 real 0m1.426s 00:39:35.192 user 0m3.619s 00:39:35.192 sys 0m0.287s 00:39:35.192 23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:35.192 
23:25:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:35.192 ************************************ 00:39:35.192 END TEST bdev_bounds 00:39:35.192 ************************************ 00:39:35.192 23:25:24 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:39:35.192 23:25:24 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:39:35.192 23:25:24 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:39:35.192 23:25:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:35.192 23:25:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:35.192 ************************************ 00:39:35.192 START TEST bdev_nbd 00:39:35.192 ************************************ 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=176961 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 176961 /var/tmp/spdk-nbd.sock 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 176961 ']' 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:35.192 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:35.192 23:25:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:35.192 [2024-07-13 23:25:24.534624] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:35.192 [2024-07-13 23:25:24.534879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:35.451 [2024-07-13 23:25:24.676898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.451 [2024-07-13 23:25:24.775088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:39:36.385 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:39:36.643 23:25:25 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:36.643 1+0 records in 00:39:36.643 1+0 records out 00:39:36.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428782 s, 9.6 MB/s 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:39:36.643 23:25:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:36.900 { 00:39:36.900 "nbd_device": "/dev/nbd0", 00:39:36.900 "bdev_name": "Nvme0n1" 00:39:36.900 } 00:39:36.900 ]' 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:36.900 { 00:39:36.900 "nbd_device": "/dev/nbd0", 00:39:36.900 "bdev_name": "Nvme0n1" 00:39:36.900 } 00:39:36.900 ]' 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:36.900 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # 
return 0 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:37.157 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:37.414 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:37.415 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:39:37.674 /dev/nbd0 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:39:37.674 
23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:37.674 1+0 records in 00:39:37.674 1+0 records out 00:39:37.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660134 s, 6.2 MB/s 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:37.674 23:25:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:37.932 { 00:39:37.932 "nbd_device": "/dev/nbd0", 00:39:37.932 "bdev_name": "Nvme0n1" 00:39:37.932 } 00:39:37.932 ]' 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:37.932 { 00:39:37.932 "nbd_device": "/dev/nbd0", 00:39:37.932 "bdev_name": "Nvme0n1" 00:39:37.932 } 00:39:37.932 ]' 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 
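The nbd plumbing traced above reduces to a handful of RPCs. A minimal manual equivalent — a sketch assuming an SPDK app is already serving /var/tmp/spdk-nbd.sock and the kernel nbd module is loaded — would be:

./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
# waitfornbd's readiness probe: one direct-I/O read of a single 4096-byte block
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0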
00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:37.932 256+0 records in 00:39:37.932 256+0 records out 00:39:37.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767465 s, 137 MB/s 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:37.932 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:38.193 256+0 records in 00:39:38.193 256+0 records out 00:39:38.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0628796 s, 16.7 MB/s 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:38.193 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:38.194 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:38.451 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:39:38.708 23:25:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:38.974 malloc_lvol_verify 00:39:38.974 23:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:39.249 d3fa91c3-1dcb-4b0c-bdd7-2bffb59e30e2 00:39:39.249 23:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:39.507 829cfd9e-3d33-45f6-8bab-62b3f2a8c3d0 00:39:39.507 23:25:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:39.765 /dev/nbd0 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:39:39.765 mke2fs 1.46.5 (30-Dec-2021) 00:39:39.765 00:39:39.765 Filesystem too small for a journal 00:39:39.765 Discarding device blocks: 0/1024 done 00:39:39.765 Creating filesystem with 1024 4k blocks and 1024 inodes 00:39:39.765 00:39:39.765 Allocating group tables: 0/1 done 00:39:39.765 Writing inode tables: 0/1 done 00:39:39.765 Writing superblocks and filesystem accounting information: 0/1 done 00:39:39.765 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:39.765 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 176961 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 176961 ']' 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 176961 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176961 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176961' 00:39:40.024 killing process with pid 176961 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 176961 00:39:40.024 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 176961 00:39:40.283 23:25:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:39:40.283 00:39:40.283 real 0m5.095s 00:39:40.283 user 0m7.884s 00:39:40.283 sys 0m1.178s 00:39:40.283 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:40.283 23:25:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:40.283 ************************************ 00:39:40.283 END TEST bdev_nbd 00:39:40.283 ************************************ 00:39:40.283 23:25:29 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:39:40.283 23:25:29 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:39:40.283 23:25:29 blockdev_nvme -- bdev/blockdev.sh@764 -- # 
'[' nvme = nvme ']' 00:39:40.283 skipping fio tests on NVMe due to multi-ns failures. 00:39:40.283 23:25:29 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:39:40.283 23:25:29 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:40.283 23:25:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:40.283 23:25:29 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:39:40.283 23:25:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:40.283 23:25:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:40.283 ************************************ 00:39:40.283 START TEST bdev_verify 00:39:40.283 ************************************ 00:39:40.283 23:25:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:40.283 [2024-07-13 23:25:29.685768] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:40.283 [2024-07-13 23:25:29.686772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177142 ] 00:39:40.542 [2024-07-13 23:25:29.836265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:40.542 [2024-07-13 23:25:29.925792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.542 [2024-07-13 23:25:29.925792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.801 Running I/O for 5 seconds... 
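For reference, the bdevperf invocation driving this verify pass unpacks as follows (a sketch; -C is left exactly as the harness passes it):

# -q 128     queue depth per job
# -o 4096    I/O size in bytes
# -w verify  write a pattern, read it back, and compare
# -t 5       run time in seconds
# -m 0x3     core mask: reactors on cores 0 and 1, one job per core,
#            each handed its own LBA range (visible in the table below)
./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3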
00:39:46.067 00:39:46.067 Latency(us) 00:39:46.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.067 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:46.067 Verification LBA range: start 0x0 length 0xa0000 00:39:46.067 Nvme0n1 : 5.01 11213.84 43.80 0.00 0.00 11352.44 841.54 16205.27 00:39:46.067 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:46.067 Verification LBA range: start 0xa0000 length 0xa0000 00:39:46.067 Nvme0n1 : 5.01 11189.42 43.71 0.00 0.00 11375.46 774.52 16920.20 00:39:46.067 =================================================================================================================== 00:39:46.067 Total : 22403.25 87.51 0.00 0.00 11363.94 774.52 16920.20 00:39:46.325 00:39:46.325 real 0m6.060s 00:39:46.325 user 0m11.354s 00:39:46.325 sys 0m0.221s 00:39:46.325 ************************************ 00:39:46.325 END TEST bdev_verify 00:39:46.325 ************************************ 00:39:46.325 23:25:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:46.325 23:25:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:46.325 23:25:35 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:39:46.325 23:25:35 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:46.325 23:25:35 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:39:46.325 23:25:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:46.325 23:25:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:46.584 ************************************ 00:39:46.584 START TEST bdev_verify_big_io 00:39:46.584 ************************************ 00:39:46.584 23:25:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:46.584 [2024-07-13 23:25:35.794912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:46.584 [2024-07-13 23:25:35.795184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177233 ] 00:39:46.584 [2024-07-13 23:25:35.948146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:46.842 [2024-07-13 23:25:36.022489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:46.842 [2024-07-13 23:25:36.022487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.842 Running I/O for 5 seconds... 
00:39:52.144 00:39:52.144 Latency(us) 00:39:52.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.144 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:52.144 Verification LBA range: start 0x0 length 0xa000 00:39:52.144 Nvme0n1 : 5.07 850.39 53.15 0.00 0.00 147044.35 487.80 164912.41 00:39:52.144 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:52.144 Verification LBA range: start 0xa000 length 0xa000 00:39:52.144 Nvme0n1 : 5.07 854.33 53.40 0.00 0.00 146318.63 543.65 234499.72 00:39:52.144 =================================================================================================================== 00:39:52.144 Total : 1704.71 106.54 0.00 0.00 146680.82 487.80 234499.72 00:39:52.711 00:39:52.711 real 0m6.137s 00:39:52.711 user 0m11.546s 00:39:52.711 sys 0m0.230s 00:39:52.711 23:25:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:52.711 ************************************ 00:39:52.711 END TEST bdev_verify_big_io 00:39:52.711 ************************************ 00:39:52.711 23:25:41 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:39:52.711 23:25:41 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:39:52.711 23:25:41 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:52.711 23:25:41 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:39:52.711 23:25:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:52.711 23:25:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:52.711 ************************************ 00:39:52.711 START TEST bdev_write_zeroes 00:39:52.711 ************************************ 00:39:52.711 23:25:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:52.711 [2024-07-13 23:25:41.981091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:52.711 [2024-07-13 23:25:41.981526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177322 ] 00:39:52.969 [2024-07-13 23:25:42.121294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.969 [2024-07-13 23:25:42.190246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.226 Running I/O for 1 seconds... 
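The MiB/s columns in these tables are simply IOPS times I/O size; a quick cross-check of the two verify runs above and the write_zeroes run just below:

python3 -c 'print(22403.25 * 4096 / 2**20)'    # 87.51  MiB/s - 4 KiB verify
python3 -c 'print(1704.71 * 65536 / 2**20)'    # 106.54 MiB/s - 64 KiB big-I/O verify
python3 -c 'print(57054.59 * 4096 / 2**20)'    # 222.87 MiB/s - 4 KiB write_zeroes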
00:39:54.158 00:39:54.158 Latency(us) 00:39:54.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.158 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:54.158 Nvme0n1 : 1.00 57054.59 222.87 0.00 0.00 2237.74 618.12 13464.67 00:39:54.158 =================================================================================================================== 00:39:54.158 Total : 57054.59 222.87 0.00 0.00 2237.74 618.12 13464.67 00:39:54.416 00:39:54.416 real 0m1.755s 00:39:54.416 user 0m1.470s 00:39:54.416 sys 0m0.184s 00:39:54.416 23:25:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:54.416 ************************************ 00:39:54.416 END TEST bdev_write_zeroes 00:39:54.416 ************************************ 00:39:54.416 23:25:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:39:54.416 23:25:43 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:39:54.416 23:25:43 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:54.416 23:25:43 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:39:54.416 23:25:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:54.416 23:25:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:54.416 ************************************ 00:39:54.416 START TEST bdev_json_nonenclosed 00:39:54.416 ************************************ 00:39:54.416 23:25:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:54.416 [2024-07-13 23:25:43.794030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:54.416 [2024-07-13 23:25:43.794483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177369 ] 00:39:54.674 [2024-07-13 23:25:43.938966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.674 [2024-07-13 23:25:44.043049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.674 [2024-07-13 23:25:44.043481] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:39:54.674 [2024-07-13 23:25:44.043684] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:54.674 [2024-07-13 23:25:44.043847] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:54.932 00:39:54.932 real 0m0.424s 00:39:54.932 user 0m0.195s 00:39:54.932 sys 0m0.127s 00:39:54.932 23:25:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:39:54.932 23:25:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:54.932 ************************************ 00:39:54.932 END TEST bdev_json_nonenclosed 00:39:54.932 ************************************ 00:39:54.932 23:25:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:39:54.932 23:25:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:39:54.932 23:25:44 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:39:54.932 23:25:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:54.932 23:25:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:39:54.932 23:25:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:54.932 23:25:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:54.932 ************************************ 00:39:54.932 START TEST bdev_json_nonarray 00:39:54.932 ************************************ 00:39:54.932 23:25:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:54.932 [2024-07-13 23:25:44.270976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:54.932 [2024-07-13 23:25:44.271881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177408 ] 00:39:55.191 [2024-07-13 23:25:44.416705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.191 [2024-07-13 23:25:44.511412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.191 [2024-07-13 23:25:44.511626] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
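Both JSON tests here are deliberate negative tests: bdevperf is fed a config that is not enclosed in {} (or whose 'subsystems' key is not an array), and the harness asserts only on the non-zero exit status (es=234 above), not on any output. The same pattern standalone, as a sketch:

if ./build/examples/bdevperf --json test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "FAIL: malformed config was accepted"
else
    echo "OK: rejected with exit status $?"
fi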
00:39:55.191 [2024-07-13 23:25:44.511690] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:55.191 [2024-07-13 23:25:44.511729] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:55.449 00:39:55.449 real 0m0.419s 00:39:55.449 user 0m0.191s 00:39:55.449 sys 0m0.125s 00:39:55.449 23:25:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:39:55.449 23:25:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:55.449 23:25:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:39:55.449 ************************************ 00:39:55.449 END TEST bdev_json_nonarray 00:39:55.449 ************************************ 00:39:55.449 23:25:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:39:55.449 23:25:44 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:39:55.449 00:39:55.449 real 0m24.405s 00:39:55.449 user 0m39.053s 00:39:55.449 sys 0m3.205s 00:39:55.449 23:25:44 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:55.449 23:25:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:55.449 ************************************ 00:39:55.449 END TEST blockdev_nvme 00:39:55.449 ************************************ 00:39:55.449 23:25:44 -- common/autotest_common.sh@1142 -- # return 0 00:39:55.449 23:25:44 -- spdk/autotest.sh@213 -- # uname -s 00:39:55.449 23:25:44 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:39:55.449 23:25:44 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:39:55.449 23:25:44 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:55.449 23:25:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:55.449 23:25:44 -- common/autotest_common.sh@10 -- # set +x 00:39:55.449 ************************************ 00:39:55.449 START TEST blockdev_nvme_gpt 00:39:55.449 ************************************ 00:39:55.449 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:39:55.450 * Looking for test storage... 
00:39:55.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=177478 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:55.450 23:25:44 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 177478 00:39:55.450 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 177478 ']' 00:39:55.450 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:55.450 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:55.450 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:55.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
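start_spdk_tgt and waitforlisten above amount to launching the target and polling its RPC socket until it answers; roughly, under those assumptions:

./build/bin/spdk_tgt &
spdk_tgt_pid=$!
# block until /var/tmp/spdk.sock (the default socket) accepts an RPC
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done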
00:39:55.450 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:55.450 23:25:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:55.708 [2024-07-13 23:25:44.901159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:39:55.708 [2024-07-13 23:25:44.901382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177478 ] 00:39:55.708 [2024-07-13 23:25:45.043130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.967 [2024-07-13 23:25:45.133241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.533 23:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:56.533 23:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:39:56.533 23:25:45 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:56.533 23:25:45 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:39:56.533 23:25:45 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:56.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:57.050 Waiting for block devices as requested 00:39:57.050 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:57.050 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:39:57.050 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:39:57.050 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:39:57.050 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:39:57.050 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:39:57.050 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:39:57.050 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:57.051 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:57.051 23:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:39:57.051 BYT; 00:39:57.051 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:39:57.051 BYT; 00:39:57.051 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:39:57.051 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:39:57.618 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:57.618 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:57.618 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:57.618 23:25:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:57.618 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:57.618 23:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:39:58.554 The operation has completed successfully. 
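The GPT setup traced here and continued just below condenses to three commands: write a fresh label with two half-disk partitions, then retype each with the GUIDs extracted from module/bdev/gpt/gpt.h so SPDK's GPT module will expose them as bdevs:

parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1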
00:39:58.554 23:25:47 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:39:59.488 The operation has completed successfully. 00:39:59.488 23:25:48 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:00.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:00.055 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 [] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:40:00.991 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:40:00.991 
23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:00.991 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:40:01.250 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 177478 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 177478 ']' 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 177478 00:40:01.250 23:25:50 blockdev_nvme_gpt -- 
common/autotest_common.sh@953 -- # uname 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177478 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:01.250 killing process with pid 177478 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177478' 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 177478 00:40:01.250 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 177478 00:40:01.818 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:01.818 23:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:40:01.818 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:40:01.818 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:01.818 23:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:01.818 ************************************ 00:40:01.818 START TEST bdev_hello_world 00:40:01.818 ************************************ 00:40:01.818 23:25:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:40:01.818 [2024-07-13 23:25:50.990841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:40:01.818 [2024-07-13 23:25:50.991106] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177887 ] 00:40:01.818 [2024-07-13 23:25:51.136487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.818 [2024-07-13 23:25:51.206171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.077 [2024-07-13 23:25:51.422360] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:40:02.077 [2024-07-13 23:25:51.422451] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:40:02.077 [2024-07-13 23:25:51.422539] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:40:02.077 [2024-07-13 23:25:51.424976] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:40:02.077 [2024-07-13 23:25:51.425560] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:40:02.077 [2024-07-13 23:25:51.425625] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:40:02.077 [2024-07-13 23:25:51.425969] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
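For reference, the bdev_hello_world pass condenses to a single invocation of the hello_bdev example (paths as in this run; the trailing '' in the run_test line above is an empty extra-arguments slot, not a typo):

cd /home/vagrant/spdk_repo/spdk
# --json points at the bdev config the test wrote earlier; -b names the GPT
# partition bdev to open, write to, and read back.
build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1

On success it logs the write completion, reads the data back, and prints the "Read string from bdev : Hello World!" notice seen above before stopping the app.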
00:40:02.077 00:40:02.077 [2024-07-13 23:25:51.426034] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:40:02.336 00:40:02.336 real 0m0.751s 00:40:02.336 user 0m0.434s 00:40:02.336 sys 0m0.217s 00:40:02.336 23:25:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:02.336 23:25:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:02.336 ************************************ 00:40:02.336 END TEST bdev_hello_world 00:40:02.336 ************************************ 00:40:02.336 23:25:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:40:02.336 23:25:51 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:40:02.336 23:25:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:02.336 23:25:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:02.336 23:25:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:02.596 ************************************ 00:40:02.596 START TEST bdev_bounds 00:40:02.596 ************************************ 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=177925 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 177925' 00:40:02.596 Process bdevio pid: 177925 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 177925 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 177925 ']' 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:02.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:02.596 23:25:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:02.596 [2024-07-13 23:25:51.800501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:40:02.596 [2024-07-13 23:25:51.801043] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177925 ] 00:40:02.596 [2024-07-13 23:25:51.974420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:02.855 [2024-07-13 23:25:52.072569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.855 [2024-07-13 23:25:52.072714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.855 [2024-07-13 23:25:52.072710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:03.443 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:03.443 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:40:03.443 23:25:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:40:03.703 I/O targets: 00:40:03.703 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:40:03.703 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:40:03.703 00:40:03.703 00:40:03.703 CUnit - A unit testing framework for C - Version 2.1-3 00:40:03.703 http://cunit.sourceforge.net/ 00:40:03.703 00:40:03.703 00:40:03.703 Suite: bdevio tests on: Nvme0n1p2 00:40:03.703 Test: blockdev write read block ...passed 00:40:03.703 Test: blockdev write zeroes read block ...passed 00:40:03.703 Test: blockdev write zeroes read no split ...passed 00:40:03.703 Test: blockdev write zeroes read split ...passed 00:40:03.703 Test: blockdev write zeroes read split partial ...passed 00:40:03.703 Test: blockdev reset ...[2024-07-13 23:25:52.881336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:40:03.703 [2024-07-13 23:25:52.883687] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:40:03.703 passed 00:40:03.703 Test: blockdev write read 8 blocks ...passed 00:40:03.703 Test: blockdev write read size > 128k ...passed 00:40:03.703 Test: blockdev write read invalid size ...passed 00:40:03.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.703 Test: blockdev write read max offset ...passed 00:40:03.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.703 Test: blockdev writev readv 8 blocks ...passed 00:40:03.703 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.703 Test: blockdev writev readv block ...passed 00:40:03.703 Test: blockdev writev readv size > 128k ...passed 00:40:03.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.703 Test: blockdev comparev and writev ...[2024-07-13 23:25:52.891088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x7980d000 len:0x1000 00:40:03.703 [2024-07-13 23:25:52.891221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:40:03.703 passed 00:40:03.703 Test: blockdev nvme passthru rw ...passed 00:40:03.703 Test: blockdev nvme passthru vendor specific ...passed 00:40:03.703 Test: blockdev nvme admin passthru ...passed 00:40:03.703 Test: blockdev copy ...passed 00:40:03.703 Suite: bdevio tests on: Nvme0n1p1 00:40:03.703 Test: blockdev write read block ...passed 00:40:03.703 Test: blockdev write zeroes read block ...passed 00:40:03.703 Test: blockdev write zeroes read no split ...passed 00:40:03.703 Test: blockdev write zeroes read split ...passed 00:40:03.703 Test: blockdev write zeroes read split partial ...passed 00:40:03.703 Test: blockdev reset ...[2024-07-13 23:25:52.905355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:40:03.703 [2024-07-13 23:25:52.907415] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:40:03.703 passed 00:40:03.703 Test: blockdev write read 8 blocks ...passed 00:40:03.703 Test: blockdev write read size > 128k ...passed 00:40:03.703 Test: blockdev write read invalid size ...passed 00:40:03.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.703 Test: blockdev write read max offset ...passed 00:40:03.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.703 Test: blockdev writev readv 8 blocks ...passed 00:40:03.703 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.703 Test: blockdev writev readv block ...passed 00:40:03.703 Test: blockdev writev readv size > 128k ...passed 00:40:03.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.703 Test: blockdev comparev and writev ...[2024-07-13 23:25:52.913823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x79809000 len:0x1000 00:40:03.703 [2024-07-13 23:25:52.913924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:40:03.703 passed 00:40:03.703 Test: blockdev nvme passthru rw ...passed 00:40:03.703 Test: blockdev nvme passthru vendor specific ...passed 00:40:03.703 Test: blockdev nvme admin passthru ...passed 00:40:03.703 Test: blockdev copy ...passed 00:40:03.703 00:40:03.703 Run Summary: Type Total Ran Passed Failed Inactive 00:40:03.703 suites 2 2 n/a 0 0 00:40:03.703 tests 46 46 46 0 0 00:40:03.703 asserts 284 284 284 0 n/a 00:40:03.703 00:40:03.703 Elapsed time = 0.113 seconds 00:40:03.703 0 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 177925 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 177925 ']' 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 177925 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177925 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:03.703 killing process with pid 177925 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177925' 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 177925 00:40:03.703 23:25:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 177925 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:40:03.968 00:40:03.968 real 0m1.438s 00:40:03.968 user 0m3.605s 00:40:03.968 sys 0m0.306s 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:03.968 ************************************ 00:40:03.968 END TEST bdev_bounds 00:40:03.968 ************************************ 00:40:03.968 23:25:53 blockdev_nvme_gpt -- 
common/autotest_common.sh@1142 -- # return 0 00:40:03.968 23:25:53 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:40:03.968 23:25:53 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:40:03.968 23:25:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:03.968 23:25:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:03.968 ************************************ 00:40:03.968 START TEST bdev_nbd 00:40:03.968 ************************************ 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=177978 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 177978 /var/tmp/spdk-nbd.sock 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 177978 ']' 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:03.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:03.968 23:25:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:03.968 [2024-07-13 23:25:53.282533] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:40:03.968 [2024-07-13 23:25:53.282788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.226 [2024-07-13 23:25:53.425646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.226 [2024-07-13 23:25:53.520098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:40:05.159 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:40:05.417 23:25:54 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:05.417 1+0 records in 00:40:05.417 1+0 records out 00:40:05.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828378 s, 4.9 MB/s 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:40:05.417 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:05.674 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:05.675 1+0 records in 00:40:05.675 1+0 records out 00:40:05.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594588 s, 6.9 MB/s 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:05.675 23:25:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:40:05.675 23:25:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:05.933 { 00:40:05.933 "nbd_device": "/dev/nbd0", 00:40:05.933 "bdev_name": "Nvme0n1p1" 00:40:05.933 }, 00:40:05.933 { 00:40:05.933 "nbd_device": "/dev/nbd1", 00:40:05.933 "bdev_name": "Nvme0n1p2" 00:40:05.933 } 00:40:05.933 ]' 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:05.933 { 00:40:05.933 "nbd_device": "/dev/nbd0", 00:40:05.933 "bdev_name": "Nvme0n1p1" 00:40:05.933 }, 00:40:05.933 { 00:40:05.933 "nbd_device": "/dev/nbd1", 00:40:05.933 "bdev_name": "Nvme0n1p2" 00:40:05.933 } 00:40:05.933 ]' 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:05.933 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:06.192 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:06.450 23:25:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:06.709 23:25:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:40:06.968 /dev/nbd0 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:06.968 1+0 records in 00:40:06.968 1+0 records out 00:40:06.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823047 s, 5.0 MB/s 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:06.968 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:40:07.226 /dev/nbd1 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:07.227 1+0 records in 00:40:07.227 1+0 records out 00:40:07.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758426 s, 5.4 MB/s 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:07.227 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:07.485 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:07.485 { 00:40:07.485 "nbd_device": "/dev/nbd0", 00:40:07.485 "bdev_name": "Nvme0n1p1" 00:40:07.485 }, 00:40:07.485 { 00:40:07.485 "nbd_device": "/dev/nbd1", 00:40:07.485 "bdev_name": "Nvme0n1p2" 00:40:07.485 } 00:40:07.485 ]' 00:40:07.485 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:40:07.485 { 00:40:07.485 "nbd_device": "/dev/nbd0", 00:40:07.485 "bdev_name": "Nvme0n1p1" 00:40:07.485 }, 00:40:07.485 { 00:40:07.485 "nbd_device": "/dev/nbd1", 00:40:07.485 "bdev_name": "Nvme0n1p2" 00:40:07.485 } 00:40:07.485 ]' 00:40:07.485 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:40:07.743 /dev/nbd1' 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:40:07.743 /dev/nbd1' 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:07.743 23:25:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:07.743 256+0 records in 00:40:07.743 256+0 records out 00:40:07.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113017 s, 92.8 MB/s 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:07.743 23:25:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:07.743 256+0 records in 00:40:07.743 256+0 records out 00:40:07.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105472 s, 9.9 MB/s 00:40:07.743 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:07.743 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:40:08.001 256+0 records in 00:40:08.001 256+0 records out 00:40:08.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0902568 s, 11.6 MB/s 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:08.001 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:08.259 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:08.516 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:08.517 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:08.517 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:08.517 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:08.774 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:08.774 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:08.774 23:25:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:08.774 
23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:40:08.774 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:09.032 malloc_lvol_verify 00:40:09.032 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:09.290 44aa952b-81c9-4e2b-aec9-961d80d72296 00:40:09.290 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:09.549 892def02-79c9-4f13-8862-95781b235e34 00:40:09.549 23:25:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:09.808 /dev/nbd0 00:40:09.808 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:40:09.808 mke2fs 1.46.5 (30-Dec-2021) 00:40:09.808 00:40:09.808 Filesystem too small for a journal 00:40:09.808 Discarding device blocks: 0/1024 done 00:40:09.808 Creating filesystem with 1024 4k blocks and 1024 inodes 00:40:09.808 00:40:09.808 Allocating group tables: 0/1 done 00:40:09.808 Writing inode tables: 0/1 done 00:40:09.808 Writing superblocks and filesystem accounting information: 0/1 done 00:40:09.808 00:40:09.808 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:40:09.808 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:09.808 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:09.808 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:09.808 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:09.809 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:09.809 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:09.809 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 
177978 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 177978 ']' 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 177978 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177978 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177978' 00:40:10.068 killing process with pid 177978 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 177978 00:40:10.068 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 177978 00:40:10.327 23:25:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:40:10.327 00:40:10.327 real 0m6.415s 00:40:10.327 user 0m9.747s 00:40:10.327 sys 0m1.719s 00:40:10.327 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:10.327 23:25:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:10.327 ************************************ 00:40:10.327 END TEST bdev_nbd 00:40:10.327 ************************************ 00:40:10.327 23:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:40:10.327 23:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:40:10.327 23:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:40:10.327 23:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:40:10.327 23:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:40:10.327 skipping fio tests on NVMe due to multi-ns failures. 00:40:10.327 23:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:10.327 23:25:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:10.327 23:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:40:10.327 23:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:10.327 23:25:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:10.327 ************************************ 00:40:10.327 START TEST bdev_verify 00:40:10.327 ************************************ 00:40:10.327 23:25:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:10.586 [2024-07-13 23:25:59.754901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:40:10.586 [2024-07-13 23:25:59.755131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178224 ] 00:40:10.586 [2024-07-13 23:25:59.899803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:10.586 [2024-07-13 23:25:59.980549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.586 [2024-07-13 23:25:59.980552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.845 Running I/O for 5 seconds... 00:40:16.111 00:40:16.111 Latency(us) 00:40:16.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:16.111 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.111 Verification LBA range: start 0x0 length 0x4ff80 00:40:16.111 Nvme0n1p1 : 5.01 4492.40 17.55 0.00 0.00 28417.67 4498.15 30980.65 00:40:16.111 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.111 Verification LBA range: start 0x4ff80 length 0x4ff80 00:40:16.111 Nvme0n1p1 : 5.02 4449.46 17.38 0.00 0.00 28666.21 1563.93 33125.47 00:40:16.111 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.111 Verification LBA range: start 0x0 length 0x4ff7f 00:40:16.111 Nvme0n1p2 : 5.02 4490.85 17.54 0.00 0.00 28370.29 2085.24 31218.97 00:40:16.111 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:16.111 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:40:16.111 Nvme0n1p2 : 5.02 4458.46 17.42 0.00 0.00 28565.75 1087.30 33602.09 00:40:16.111 =================================================================================================================== 00:40:16.111 Total : 17891.16 69.89 0.00 0.00 28504.55 1087.30 33602.09 00:40:16.370 ************************************ 00:40:16.370 END TEST bdev_verify 00:40:16.370 ************************************ 00:40:16.370 00:40:16.370 real 0m6.000s 00:40:16.370 user 0m11.274s 00:40:16.370 sys 0m0.233s 00:40:16.370 23:26:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:16.370 23:26:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:40:16.370 23:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:40:16.370 23:26:05 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:16.370 23:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:40:16.370 23:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:16.370 23:26:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:16.370 ************************************ 00:40:16.370 START TEST bdev_verify_big_io 00:40:16.370 ************************************ 00:40:16.370 23:26:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:16.639 [2024-07-13 23:26:05.818193] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:40:16.639 [2024-07-13 23:26:05.818472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178315 ] 00:40:16.639 [2024-07-13 23:26:05.969380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:16.913 [2024-07-13 23:26:06.041891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.913 [2024-07-13 23:26:06.041897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.913 Running I/O for 5 seconds... 00:40:22.176 00:40:22.176 Latency(us) 00:40:22.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:22.176 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:22.176 Verification LBA range: start 0x0 length 0x4ff8 00:40:22.176 Nvme0n1p1 : 5.08 428.44 26.78 0.00 0.00 293371.56 51475.55 255471.24 00:40:22.176 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:22.176 Verification LBA range: start 0x4ff8 length 0x4ff8 00:40:22.176 Nvme0n1p1 : 5.24 440.10 27.51 0.00 0.00 286351.10 13166.78 234499.72 00:40:22.176 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:22.176 Verification LBA range: start 0x0 length 0x4ff7 00:40:22.176 Nvme0n1p2 : 5.23 448.08 28.00 0.00 0.00 273547.45 3932.16 276442.76 00:40:22.176 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:22.176 Verification LBA range: start 0x4ff7 length 0x4ff7 00:40:22.176 Nvme0n1p2 : 5.24 439.79 27.49 0.00 0.00 279978.69 4289.63 229733.47 00:40:22.176 =================================================================================================================== 00:40:22.176 Total : 1756.42 109.78 0.00 0.00 283127.30 3932.16 276442.76 00:40:22.739 ************************************ 00:40:22.739 END TEST bdev_verify_big_io 00:40:22.739 ************************************ 00:40:22.739 00:40:22.739 real 0m6.344s 00:40:22.739 user 0m11.963s 00:40:22.739 sys 0m0.209s 00:40:22.739 23:26:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:22.739 23:26:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:40:22.996 23:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:40:22.996 23:26:12 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:22.996 23:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:40:22.996 23:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:22.996 23:26:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:22.996 ************************************ 00:40:22.996 START TEST bdev_write_zeroes 00:40:22.996 ************************************ 00:40:22.996 23:26:12 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:22.996 [2024-07-13 23:26:12.222506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
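The Total rows of the two verify tables are self-consistent: MiB/s is just IOPS times the I/O size, as a quick bc check shows:

  echo '17891.16 * 4096 / 1048576' | bc -l    # ~69.89 MiB/s for the 4 KiB run
  echo '1756.42 * 65536 / 1048576' | bc -l    # ~109.78 MiB/s for the 64 KiB run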
00:40:22.996 [2024-07-13 23:26:12.222793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178404 ] 00:40:22.996 [2024-07-13 23:26:12.367827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.254 [2024-07-13 23:26:12.429097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.254 Running I/O for 1 seconds... 00:40:24.620 00:40:24.620 Latency(us) 00:40:24.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.620 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:24.621 Nvme0n1p1 : 1.01 25629.03 100.11 0.00 0.00 4983.03 2398.02 17873.45 00:40:24.621 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:24.621 Nvme0n1p2 : 1.01 25599.14 100.00 0.00 0.00 4981.81 2755.49 17754.30 00:40:24.621 =================================================================================================================== 00:40:24.621 Total : 51228.16 200.11 0.00 0.00 4982.42 2398.02 17873.45 00:40:24.621 00:40:24.621 real 0m1.730s 00:40:24.621 user 0m1.454s 00:40:24.621 sys 0m0.177s 00:40:24.621 23:26:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:24.621 23:26:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:40:24.621 ************************************ 00:40:24.621 END TEST bdev_write_zeroes 00:40:24.621 ************************************ 00:40:24.621 23:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:40:24.621 23:26:13 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:24.621 23:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:40:24.621 23:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:24.621 23:26:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:24.621 ************************************ 00:40:24.621 START TEST bdev_json_nonenclosed 00:40:24.621 ************************************ 00:40:24.621 23:26:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:24.621 [2024-07-13 23:26:13.999827] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:40:24.621 [2024-07-13 23:26:14.000089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178452 ] 00:40:24.878 [2024-07-13 23:26:14.147525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.878 [2024-07-13 23:26:14.208909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.878 [2024-07-13 23:26:14.209067] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:40:24.878 [2024-07-13 23:26:14.209109] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:24.878 [2024-07-13 23:26:14.209140] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:25.137 00:40:25.137 real 0m0.375s 00:40:25.137 user 0m0.165s 00:40:25.137 sys 0m0.109s 00:40:25.137 23:26:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:40:25.137 23:26:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:25.137 23:26:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:40:25.137 ************************************ 00:40:25.137 END TEST bdev_json_nonenclosed 00:40:25.137 ************************************ 00:40:25.137 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:40:25.137 23:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:40:25.137 23:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:25.137 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:40:25.137 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:25.137 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:25.137 ************************************ 00:40:25.137 START TEST bdev_json_nonarray 00:40:25.137 ************************************ 00:40:25.137 23:26:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:25.137 [2024-07-13 23:26:14.425130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:40:25.137 [2024-07-13 23:26:14.425377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178474 ] 00:40:25.395 [2024-07-13 23:26:14.572422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.395 [2024-07-13 23:26:14.643925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.395 [2024-07-13 23:26:14.644084] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
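Both negative tests feed bdevperf a deliberately broken config: per the errors above, nonenclosed.json is not wrapped in a top-level {} object and nonarray.json makes "subsystems" something other than an array, so json_config rejects each and the app exits non-zero (es=234) as expected. For contrast, the smallest well-formed shape json_config accepts (illustrative; the real bdev.json also carries the NVMe attach config):

cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF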
00:40:25.395 [2024-07-13 23:26:14.644132] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:25.395 [2024-07-13 23:26:14.644163] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:25.395 00:40:25.395 real 0m0.393s 00:40:25.395 user 0m0.196s 00:40:25.395 sys 0m0.096s 00:40:25.395 23:26:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:40:25.395 23:26:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:25.395 23:26:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:40:25.395 ************************************ 00:40:25.396 END TEST bdev_json_nonarray 00:40:25.396 ************************************ 00:40:25.655 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:40:25.655 23:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:40:25.655 23:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:40:25.655 23:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:40:25.655 23:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:40:25.655 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:25.655 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:25.655 23:26:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:25.655 ************************************ 00:40:25.655 START TEST bdev_gpt_uuid 00:40:25.655 ************************************ 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=178502 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 178502 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 178502 ']' 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:25.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:25.655 23:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:40:25.655 [2024-07-13 23:26:14.884282] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
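The bdev_gpt_uuid test that follows starts spdk_tgt, loads bdev.json into it, and then verifies each GPT partition bdev by GUID over RPC. Outside the harness, the same lookup could be done with the stock rpc.py client (a sketch; below, the harness's rpc_cmd wrapper performs the equivalent calls):

  # fetch the bdev aliased by the partition's unique GUID and confirm the
  # GUID recorded in its GPT driver data matches
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'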
00:40:25.655 [2024-07-13 23:26:14.884790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178502 ] 00:40:25.655 [2024-07-13 23:26:15.032814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.914 [2024-07-13 23:26:15.107460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.481 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:26.481 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:40:26.481 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:26.481 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:26.481 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:26.740 Some configs were skipped because the RPC state that can call them passed over. 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:26.740 23:26:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:40:26.740 { 00:40:26.740 "name": "Nvme0n1p1", 00:40:26.740 "aliases": [ 00:40:26.740 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:40:26.740 ], 00:40:26.740 "product_name": "GPT Disk", 00:40:26.740 "block_size": 4096, 00:40:26.740 "num_blocks": 655104, 00:40:26.740 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:40:26.740 "assigned_rate_limits": { 00:40:26.740 "rw_ios_per_sec": 0, 00:40:26.740 "rw_mbytes_per_sec": 0, 00:40:26.740 "r_mbytes_per_sec": 0, 00:40:26.740 "w_mbytes_per_sec": 0 00:40:26.740 }, 00:40:26.740 "claimed": false, 00:40:26.740 "zoned": false, 00:40:26.740 "supported_io_types": { 00:40:26.740 "read": true, 00:40:26.740 "write": true, 00:40:26.740 "unmap": true, 00:40:26.740 "flush": true, 00:40:26.740 "reset": true, 00:40:26.740 "nvme_admin": false, 00:40:26.740 "nvme_io": false, 00:40:26.740 "nvme_io_md": false, 00:40:26.740 "write_zeroes": true, 00:40:26.740 "zcopy": false, 00:40:26.740 "get_zone_info": false, 00:40:26.740 "zone_management": false, 00:40:26.740 "zone_append": false, 00:40:26.740 "compare": true, 00:40:26.740 "compare_and_write": false, 00:40:26.740 "abort": true, 00:40:26.740 "seek_hole": false, 00:40:26.740 "seek_data": false, 00:40:26.740 "copy": true, 00:40:26.740 "nvme_iov_md": false 00:40:26.740 }, 00:40:26.740 "driver_specific": { 
00:40:26.740 "gpt": { 00:40:26.740 "base_bdev": "Nvme0n1", 00:40:26.740 "offset_blocks": 256, 00:40:26.740 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:40:26.740 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:40:26.740 "partition_name": "SPDK_TEST_first" 00:40:26.740 } 00:40:26.740 } 00:40:26.740 } 00:40:26.740 ]' 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:40:26.740 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:40:26.999 { 00:40:26.999 "name": "Nvme0n1p2", 00:40:26.999 "aliases": [ 00:40:26.999 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:40:26.999 ], 00:40:26.999 "product_name": "GPT Disk", 00:40:26.999 "block_size": 4096, 00:40:26.999 "num_blocks": 655103, 00:40:26.999 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:40:26.999 "assigned_rate_limits": { 00:40:26.999 "rw_ios_per_sec": 0, 00:40:26.999 "rw_mbytes_per_sec": 0, 00:40:26.999 "r_mbytes_per_sec": 0, 00:40:26.999 "w_mbytes_per_sec": 0 00:40:26.999 }, 00:40:26.999 "claimed": false, 00:40:26.999 "zoned": false, 00:40:26.999 "supported_io_types": { 00:40:26.999 "read": true, 00:40:26.999 "write": true, 00:40:26.999 "unmap": true, 00:40:26.999 "flush": true, 00:40:26.999 "reset": true, 00:40:26.999 "nvme_admin": false, 00:40:26.999 "nvme_io": false, 00:40:26.999 "nvme_io_md": false, 00:40:26.999 "write_zeroes": true, 00:40:26.999 "zcopy": false, 00:40:26.999 "get_zone_info": false, 00:40:26.999 "zone_management": false, 00:40:26.999 "zone_append": false, 00:40:26.999 "compare": true, 00:40:26.999 "compare_and_write": false, 00:40:26.999 "abort": true, 00:40:26.999 "seek_hole": false, 00:40:26.999 "seek_data": false, 00:40:26.999 "copy": true, 00:40:26.999 "nvme_iov_md": false 00:40:26.999 }, 00:40:26.999 "driver_specific": { 00:40:26.999 "gpt": { 00:40:26.999 "base_bdev": "Nvme0n1", 00:40:26.999 "offset_blocks": 655360, 00:40:26.999 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:40:26.999 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:40:26.999 "partition_name": "SPDK_TEST_second" 00:40:26.999 } 00:40:26.999 } 00:40:26.999 } 00:40:26.999 ]' 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:40:26.999 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 178502 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 178502 ']' 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 178502 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178502 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:27.000 killing process with pid 178502 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178502' 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 178502 00:40:27.000 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 178502 00:40:27.568 00:40:27.568 real 0m1.961s 00:40:27.568 user 0m2.274s 00:40:27.568 sys 0m0.418s 00:40:27.568 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:27.568 23:26:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:27.568 ************************************ 00:40:27.568 END TEST bdev_gpt_uuid 00:40:27.568 ************************************ 00:40:27.568 23:26:16 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:40:27.568 23:26:16 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:27.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:27.827 Waiting for block devices as requested 00:40:27.827 
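Cleanup hands the device back to the kernel: setup.sh reset rebinds the controller from the userspace driver to nvme (the "uio_pci_generic -> nvme" line below), and wipefs then erases the GPT signatures the test wrote. The rebind amounts to roughly this sysfs sequence (an illustrative simplification; setup.sh handles device enumeration and driver overrides itself):

  echo 0000:00:10.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
  echo 0000:00:10.0 > /sys/bus/pci/drivers/nvme/bind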
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:28.086 23:26:17 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:40:28.086 23:26:17 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:40:28.086 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:40:28.086 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:40:28.086 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:40:28.086 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:40:28.086 23:26:17 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:40:28.086 00:40:28.086 real 0m32.540s 00:40:28.086 user 0m48.454s 00:40:28.086 sys 0m5.789s 00:40:28.086 23:26:17 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:28.086 23:26:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:28.086 ************************************ 00:40:28.086 END TEST blockdev_nvme_gpt 00:40:28.086 ************************************ 00:40:28.086 23:26:17 -- common/autotest_common.sh@1142 -- # return 0 00:40:28.086 23:26:17 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:40:28.086 23:26:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:28.086 23:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:28.086 23:26:17 -- common/autotest_common.sh@10 -- # set +x 00:40:28.086 ************************************ 00:40:28.086 START TEST nvme 00:40:28.086 ************************************ 00:40:28.086 23:26:17 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:40:28.086 * Looking for test storage... 00:40:28.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:28.086 23:26:17 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:28.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:28.654 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:29.591 23:26:18 nvme -- nvme/nvme.sh@79 -- # uname 00:40:29.591 23:26:18 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:40:29.591 23:26:18 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:40:29.591 23:26:18 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1069 -- # stubpid=178890 00:40:29.591 Waiting for stub to ready for secondary processes... 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/178890 ]] 00:40:29.591 23:26:18 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:40:29.591 [2024-07-13 23:26:18.949071] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:40:29.591 [2024-07-13 23:26:18.949318] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:40:30.524 23:26:19 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:40:30.524 23:26:19 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/178890 ]] 00:40:30.524 23:26:19 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:40:31.088 [2024-07-13 23:26:20.200247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:31.088 [2024-07-13 23:26:20.273886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:31.088 [2024-07-13 23:26:20.273979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:31.088 [2024-07-13 23:26:20.274299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:31.088 [2024-07-13 23:26:20.282878] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:40:31.088 [2024-07-13 23:26:20.283002] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:40:31.089 [2024-07-13 23:26:20.291544] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:40:31.089 [2024-07-13 23:26:20.291769] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:40:31.653 23:26:20 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:40:31.653 23:26:20 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:40:31.653 done. 00:40:31.653 23:26:20 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:40:31.653 23:26:20 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:40:31.653 23:26:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:31.653 23:26:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:31.653 ************************************ 00:40:31.653 START TEST nvme_reset 00:40:31.653 ************************************ 00:40:31.653 23:26:20 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:40:31.911 Initializing NVMe Controllers 00:40:31.911 Skipping QEMU NVMe SSD at 0000:00:10.0 00:40:31.911 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:40:31.911 00:40:31.911 real 0m0.278s 00:40:31.911 user 0m0.097s 00:40:31.911 sys 0m0.105s 00:40:31.911 23:26:21 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:31.911 23:26:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:40:31.911 ************************************ 00:40:31.911 END TEST nvme_reset 00:40:31.911 ************************************ 00:40:31.911 23:26:21 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:31.911 23:26:21 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:40:31.911 23:26:21 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:31.911 23:26:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:31.911 23:26:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:31.911 ************************************ 00:40:31.911 START TEST nvme_identify 00:40:31.911 ************************************ 00:40:31.911 
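nvme_identify enumerates the NVMe PCI addresses via gen_nvme.sh, exactly as the traces below show, and runs spdk_nvme_identify against each. Condensed into a standalone loop built only from commands visible in this run:

  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done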
23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:40:31.911 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:40:31.912 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:40:31.912 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:40:31.912 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:40:31.912 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:40:31.912 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:40:32.171 [2024-07-13 23:26:21.516353] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 178929 terminated unexpected 00:40:32.171 ===================================================== 00:40:32.171 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:32.171 ===================================================== 00:40:32.171 Controller Capabilities/Features 00:40:32.171 ================================ 00:40:32.171 Vendor ID: 1b36 00:40:32.171 Subsystem Vendor ID: 1af4 00:40:32.171 Serial Number: 12340 00:40:32.171 Model Number: QEMU NVMe Ctrl 00:40:32.171 Firmware Version: 8.0.0 00:40:32.171 Recommended Arb Burst: 6 00:40:32.171 IEEE OUI Identifier: 00 54 52 00:40:32.171 Multi-path I/O 00:40:32.171 May have multiple subsystem ports: No 00:40:32.171 May have multiple controllers: No 00:40:32.171 Associated with SR-IOV VF: No 00:40:32.171 Max Data Transfer Size: 524288 00:40:32.171 Max Number of Namespaces: 256 00:40:32.171 Max Number of I/O Queues: 64 00:40:32.171 NVMe Specification Version (VS): 1.4 00:40:32.171 NVMe Specification Version (Identify): 1.4 00:40:32.171 Maximum Queue Entries: 2048 00:40:32.171 Contiguous Queues Required: Yes 00:40:32.171 Arbitration Mechanisms Supported 00:40:32.171 Weighted Round Robin: Not Supported 00:40:32.171 Vendor Specific: Not Supported 00:40:32.171 Reset Timeout: 7500 ms 00:40:32.171 Doorbell Stride: 4 bytes 00:40:32.171 NVM Subsystem Reset: Not Supported 00:40:32.171 Command Sets Supported 00:40:32.171 NVM Command Set: Supported 00:40:32.171 Boot Partition: Not Supported 00:40:32.171 Memory Page Size Minimum: 4096 bytes 00:40:32.171 Memory Page Size Maximum: 65536 bytes 00:40:32.171 Persistent Memory Region: Not Supported 00:40:32.171 Optional Asynchronous Events Supported 00:40:32.171 Namespace Attribute Notices: Supported 00:40:32.171 Firmware Activation Notices: Not Supported 00:40:32.171 ANA Change Notices: Not Supported 00:40:32.171 PLE Aggregate Log Change Notices: Not Supported 00:40:32.171 LBA Status Info Alert Notices: Not Supported 00:40:32.171 EGE Aggregate Log Change Notices: Not Supported 00:40:32.171 Normal NVM Subsystem Shutdown event: Not Supported 00:40:32.171 Zone Descriptor Change Notices: Not Supported 00:40:32.171 
Discovery Log Change Notices: Not Supported 00:40:32.171 Controller Attributes 00:40:32.171 128-bit Host Identifier: Not Supported 00:40:32.171 Non-Operational Permissive Mode: Not Supported 00:40:32.171 NVM Sets: Not Supported 00:40:32.171 Read Recovery Levels: Not Supported 00:40:32.171 Endurance Groups: Not Supported 00:40:32.171 Predictable Latency Mode: Not Supported 00:40:32.171 Traffic Based Keep ALive: Not Supported 00:40:32.171 Namespace Granularity: Not Supported 00:40:32.171 SQ Associations: Not Supported 00:40:32.171 UUID List: Not Supported 00:40:32.171 Multi-Domain Subsystem: Not Supported 00:40:32.171 Fixed Capacity Management: Not Supported 00:40:32.171 Variable Capacity Management: Not Supported 00:40:32.171 Delete Endurance Group: Not Supported 00:40:32.171 Delete NVM Set: Not Supported 00:40:32.171 Extended LBA Formats Supported: Supported 00:40:32.171 Flexible Data Placement Supported: Not Supported 00:40:32.171 00:40:32.171 Controller Memory Buffer Support 00:40:32.171 ================================ 00:40:32.171 Supported: No 00:40:32.171 00:40:32.171 Persistent Memory Region Support 00:40:32.171 ================================ 00:40:32.171 Supported: No 00:40:32.171 00:40:32.171 Admin Command Set Attributes 00:40:32.171 ============================ 00:40:32.171 Security Send/Receive: Not Supported 00:40:32.171 Format NVM: Supported 00:40:32.171 Firmware Activate/Download: Not Supported 00:40:32.171 Namespace Management: Supported 00:40:32.171 Device Self-Test: Not Supported 00:40:32.171 Directives: Supported 00:40:32.171 NVMe-MI: Not Supported 00:40:32.171 Virtualization Management: Not Supported 00:40:32.171 Doorbell Buffer Config: Supported 00:40:32.171 Get LBA Status Capability: Not Supported 00:40:32.171 Command & Feature Lockdown Capability: Not Supported 00:40:32.171 Abort Command Limit: 4 00:40:32.171 Async Event Request Limit: 4 00:40:32.171 Number of Firmware Slots: N/A 00:40:32.171 Firmware Slot 1 Read-Only: N/A 00:40:32.171 Firmware Activation Without Reset: N/A 00:40:32.171 Multiple Update Detection Support: N/A 00:40:32.171 Firmware Update Granularity: No Information Provided 00:40:32.171 Per-Namespace SMART Log: Yes 00:40:32.171 Asymmetric Namespace Access Log Page: Not Supported 00:40:32.171 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:40:32.171 Command Effects Log Page: Supported 00:40:32.171 Get Log Page Extended Data: Supported 00:40:32.171 Telemetry Log Pages: Not Supported 00:40:32.171 Persistent Event Log Pages: Not Supported 00:40:32.171 Supported Log Pages Log Page: May Support 00:40:32.171 Commands Supported & Effects Log Page: Not Supported 00:40:32.171 Feature Identifiers & Effects Log Page:May Support 00:40:32.171 NVMe-MI Commands & Effects Log Page: May Support 00:40:32.171 Data Area 4 for Telemetry Log: Not Supported 00:40:32.171 Error Log Page Entries Supported: 1 00:40:32.171 Keep Alive: Not Supported 00:40:32.171 00:40:32.171 NVM Command Set Attributes 00:40:32.171 ========================== 00:40:32.171 Submission Queue Entry Size 00:40:32.171 Max: 64 00:40:32.171 Min: 64 00:40:32.171 Completion Queue Entry Size 00:40:32.171 Max: 16 00:40:32.171 Min: 16 00:40:32.171 Number of Namespaces: 256 00:40:32.171 Compare Command: Supported 00:40:32.171 Write Uncorrectable Command: Not Supported 00:40:32.171 Dataset Management Command: Supported 00:40:32.171 Write Zeroes Command: Supported 00:40:32.171 Set Features Save Field: Supported 00:40:32.171 Reservations: Not Supported 00:40:32.171 Timestamp: Supported 00:40:32.171 Copy: Supported 
00:40:32.171 Volatile Write Cache: Present 00:40:32.171 Atomic Write Unit (Normal): 1 00:40:32.171 Atomic Write Unit (PFail): 1 00:40:32.171 Atomic Compare & Write Unit: 1 00:40:32.171 Fused Compare & Write: Not Supported 00:40:32.171 Scatter-Gather List 00:40:32.171 SGL Command Set: Supported 00:40:32.171 SGL Keyed: Not Supported 00:40:32.172 SGL Bit Bucket Descriptor: Not Supported 00:40:32.172 SGL Metadata Pointer: Not Supported 00:40:32.172 Oversized SGL: Not Supported 00:40:32.172 SGL Metadata Address: Not Supported 00:40:32.172 SGL Offset: Not Supported 00:40:32.172 Transport SGL Data Block: Not Supported 00:40:32.172 Replay Protected Memory Block: Not Supported 00:40:32.172 00:40:32.172 Firmware Slot Information 00:40:32.172 ========================= 00:40:32.172 Active slot: 1 00:40:32.172 Slot 1 Firmware Revision: 1.0 00:40:32.172 00:40:32.172 00:40:32.172 Commands Supported and Effects 00:40:32.172 ============================== 00:40:32.172 Admin Commands 00:40:32.172 -------------- 00:40:32.172 Delete I/O Submission Queue (00h): Supported 00:40:32.172 Create I/O Submission Queue (01h): Supported 00:40:32.172 Get Log Page (02h): Supported 00:40:32.172 Delete I/O Completion Queue (04h): Supported 00:40:32.172 Create I/O Completion Queue (05h): Supported 00:40:32.172 Identify (06h): Supported 00:40:32.172 Abort (08h): Supported 00:40:32.172 Set Features (09h): Supported 00:40:32.172 Get Features (0Ah): Supported 00:40:32.172 Asynchronous Event Request (0Ch): Supported 00:40:32.172 Namespace Attachment (15h): Supported NS-Inventory-Change 00:40:32.172 Directive Send (19h): Supported 00:40:32.172 Directive Receive (1Ah): Supported 00:40:32.172 Virtualization Management (1Ch): Supported 00:40:32.172 Doorbell Buffer Config (7Ch): Supported 00:40:32.172 Format NVM (80h): Supported LBA-Change 00:40:32.172 I/O Commands 00:40:32.172 ------------ 00:40:32.172 Flush (00h): Supported LBA-Change 00:40:32.172 Write (01h): Supported LBA-Change 00:40:32.172 Read (02h): Supported 00:40:32.172 Compare (05h): Supported 00:40:32.172 Write Zeroes (08h): Supported LBA-Change 00:40:32.172 Dataset Management (09h): Supported LBA-Change 00:40:32.172 Unknown (0Ch): Supported 00:40:32.172 Unknown (12h): Supported 00:40:32.172 Copy (19h): Supported LBA-Change 00:40:32.172 Unknown (1Dh): Supported LBA-Change 00:40:32.172 00:40:32.172 Error Log 00:40:32.172 ========= 00:40:32.172 00:40:32.172 Arbitration 00:40:32.172 =========== 00:40:32.172 Arbitration Burst: no limit 00:40:32.172 00:40:32.172 Power Management 00:40:32.172 ================ 00:40:32.172 Number of Power States: 1 00:40:32.172 Current Power State: Power State #0 00:40:32.172 Power State #0: 00:40:32.172 Max Power: 25.00 W 00:40:32.172 Non-Operational State: Operational 00:40:32.172 Entry Latency: 16 microseconds 00:40:32.172 Exit Latency: 4 microseconds 00:40:32.172 Relative Read Throughput: 0 00:40:32.172 Relative Read Latency: 0 00:40:32.172 Relative Write Throughput: 0 00:40:32.172 Relative Write Latency: 0 00:40:32.172 Idle Power: Not Reported 00:40:32.172 Active Power: Not Reported 00:40:32.172 Non-Operational Permissive Mode: Not Supported 00:40:32.172 00:40:32.172 Health Information 00:40:32.172 ================== 00:40:32.172 Critical Warnings: 00:40:32.172 Available Spare Space: OK 00:40:32.172 Temperature: OK 00:40:32.172 Device Reliability: OK 00:40:32.172 Read Only: No 00:40:32.172 Volatile Memory Backup: OK 00:40:32.172 Current Temperature: 323 Kelvin (50 Celsius) 00:40:32.172 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:40:32.172 Available Spare: 0% 00:40:32.172 Available Spare Threshold: 0% 00:40:32.172 Life Percentage Used: 0% 00:40:32.172 Data Units Read: 4408 00:40:32.172 Data Units Written: 4065 00:40:32.172 Host Read Commands: 224260 00:40:32.172 Host Write Commands: 237208 00:40:32.172 Controller Busy Time: 0 minutes 00:40:32.172 Power Cycles: 0 00:40:32.172 Power On Hours: 0 hours 00:40:32.172 Unsafe Shutdowns: 0 00:40:32.172 Unrecoverable Media Errors: 0 00:40:32.172 Lifetime Error Log Entries: 0 00:40:32.172 Warning Temperature Time: 0 minutes 00:40:32.172 Critical Temperature Time: 0 minutes 00:40:32.172 00:40:32.172 Number of Queues 00:40:32.172 ================ 00:40:32.172 Number of I/O Submission Queues: 64 00:40:32.172 Number of I/O Completion Queues: 64 00:40:32.172 00:40:32.172 ZNS Specific Controller Data 00:40:32.172 ============================ 00:40:32.172 Zone Append Size Limit: 0 00:40:32.172 00:40:32.172 00:40:32.172 Active Namespaces 00:40:32.172 ================= 00:40:32.172 Namespace ID:1 00:40:32.172 Error Recovery Timeout: Unlimited 00:40:32.172 Command Set Identifier: NVM (00h) 00:40:32.172 Deallocate: Supported 00:40:32.172 Deallocated/Unwritten Error: Supported 00:40:32.172 Deallocated Read Value: All 0x00 00:40:32.172 Deallocate in Write Zeroes: Not Supported 00:40:32.172 Deallocated Guard Field: 0xFFFF 00:40:32.172 Flush: Supported 00:40:32.172 Reservation: Not Supported 00:40:32.172 Namespace Sharing Capabilities: Private 00:40:32.172 Size (in LBAs): 1310720 (5GiB) 00:40:32.172 Capacity (in LBAs): 1310720 (5GiB) 00:40:32.172 Utilization (in LBAs): 1310720 (5GiB) 00:40:32.172 Thin Provisioning: Not Supported 00:40:32.172 Per-NS Atomic Units: No 00:40:32.172 Maximum Single Source Range Length: 128 00:40:32.172 Maximum Copy Length: 128 00:40:32.172 Maximum Source Range Count: 128 00:40:32.172 NGUID/EUI64 Never Reused: No 00:40:32.172 Namespace Write Protected: No 00:40:32.172 Number of LBA Formats: 8 00:40:32.172 Current LBA Format: LBA Format #04 00:40:32.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:32.172 LBA Format #01: Data Size: 512 Metadata Size: 8 00:40:32.172 LBA Format #02: Data Size: 512 Metadata Size: 16 00:40:32.172 LBA Format #03: Data Size: 512 Metadata Size: 64 00:40:32.172 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:40:32.172 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:40:32.172 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:40:32.172 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:40:32.172 00:40:32.172 NVM Specific Namespace Data 00:40:32.172 =========================== 00:40:32.172 Logical Block Storage Tag Mask: 0 00:40:32.172 Protection Information Capabilities: 00:40:32.172 16b Guard Protection Information Storage Tag Support: No 00:40:32.172 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:40:32.172 Storage Tag Check Read Support: No 00:40:32.172 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.172 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:40:32.172 23:26:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:40:32.431 ===================================================== 00:40:32.431 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:32.431 ===================================================== 00:40:32.431 Controller Capabilities/Features 00:40:32.431 ================================ 00:40:32.431 Vendor ID: 1b36 00:40:32.431 Subsystem Vendor ID: 1af4 00:40:32.431 Serial Number: 12340 00:40:32.431 Model Number: QEMU NVMe Ctrl 00:40:32.431 Firmware Version: 8.0.0 00:40:32.431 Recommended Arb Burst: 6 00:40:32.431 IEEE OUI Identifier: 00 54 52 00:40:32.431 Multi-path I/O 00:40:32.431 May have multiple subsystem ports: No 00:40:32.431 May have multiple controllers: No 00:40:32.431 Associated with SR-IOV VF: No 00:40:32.431 Max Data Transfer Size: 524288 00:40:32.431 Max Number of Namespaces: 256 00:40:32.431 Max Number of I/O Queues: 64 00:40:32.431 NVMe Specification Version (VS): 1.4 00:40:32.431 NVMe Specification Version (Identify): 1.4 00:40:32.431 Maximum Queue Entries: 2048 00:40:32.431 Contiguous Queues Required: Yes 00:40:32.431 Arbitration Mechanisms Supported 00:40:32.431 Weighted Round Robin: Not Supported 00:40:32.431 Vendor Specific: Not Supported 00:40:32.431 Reset Timeout: 7500 ms 00:40:32.431 Doorbell Stride: 4 bytes 00:40:32.431 NVM Subsystem Reset: Not Supported 00:40:32.431 Command Sets Supported 00:40:32.431 NVM Command Set: Supported 00:40:32.431 Boot Partition: Not Supported 00:40:32.431 Memory Page Size Minimum: 4096 bytes 00:40:32.431 Memory Page Size Maximum: 65536 bytes 00:40:32.431 Persistent Memory Region: Not Supported 00:40:32.431 Optional Asynchronous Events Supported 00:40:32.431 Namespace Attribute Notices: Supported 00:40:32.431 Firmware Activation Notices: Not Supported 00:40:32.431 ANA Change Notices: Not Supported 00:40:32.431 PLE Aggregate Log Change Notices: Not Supported 00:40:32.431 LBA Status Info Alert Notices: Not Supported 00:40:32.431 EGE Aggregate Log Change Notices: Not Supported 00:40:32.431 Normal NVM Subsystem Shutdown event: Not Supported 00:40:32.431 Zone Descriptor Change Notices: Not Supported 00:40:32.431 Discovery Log Change Notices: Not Supported 00:40:32.431 Controller Attributes 00:40:32.431 128-bit Host Identifier: Not Supported 00:40:32.431 Non-Operational Permissive Mode: Not Supported 00:40:32.431 NVM Sets: Not Supported 00:40:32.431 Read Recovery Levels: Not Supported 00:40:32.431 Endurance Groups: Not Supported 00:40:32.431 Predictable Latency Mode: Not Supported 00:40:32.431 Traffic Based Keep ALive: Not Supported 00:40:32.431 Namespace Granularity: Not Supported 00:40:32.431 SQ Associations: Not Supported 00:40:32.431 UUID List: Not Supported 00:40:32.431 Multi-Domain Subsystem: Not Supported 00:40:32.431 Fixed Capacity Management: Not Supported 00:40:32.431 Variable Capacity Management: Not Supported 00:40:32.431 Delete Endurance Group: Not Supported 00:40:32.431 Delete NVM Set: Not Supported 00:40:32.431 Extended LBA Formats Supported: Supported 00:40:32.431 Flexible Data Placement Supported: Not Supported 00:40:32.431 00:40:32.431 Controller Memory Buffer Support 00:40:32.431 
================================ 00:40:32.431 Supported: No 00:40:32.431 00:40:32.431 Persistent Memory Region Support 00:40:32.431 ================================ 00:40:32.431 Supported: No 00:40:32.431 00:40:32.431 Admin Command Set Attributes 00:40:32.431 ============================ 00:40:32.431 Security Send/Receive: Not Supported 00:40:32.431 Format NVM: Supported 00:40:32.431 Firmware Activate/Download: Not Supported 00:40:32.431 Namespace Management: Supported 00:40:32.431 Device Self-Test: Not Supported 00:40:32.431 Directives: Supported 00:40:32.431 NVMe-MI: Not Supported 00:40:32.431 Virtualization Management: Not Supported 00:40:32.431 Doorbell Buffer Config: Supported 00:40:32.431 Get LBA Status Capability: Not Supported 00:40:32.431 Command & Feature Lockdown Capability: Not Supported 00:40:32.431 Abort Command Limit: 4 00:40:32.431 Async Event Request Limit: 4 00:40:32.431 Number of Firmware Slots: N/A 00:40:32.431 Firmware Slot 1 Read-Only: N/A 00:40:32.431 Firmware Activation Without Reset: N/A 00:40:32.431 Multiple Update Detection Support: N/A 00:40:32.431 Firmware Update Granularity: No Information Provided 00:40:32.431 Per-Namespace SMART Log: Yes 00:40:32.431 Asymmetric Namespace Access Log Page: Not Supported 00:40:32.431 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:40:32.431 Command Effects Log Page: Supported 00:40:32.431 Get Log Page Extended Data: Supported 00:40:32.431 Telemetry Log Pages: Not Supported 00:40:32.431 Persistent Event Log Pages: Not Supported 00:40:32.431 Supported Log Pages Log Page: May Support 00:40:32.431 Commands Supported & Effects Log Page: Not Supported 00:40:32.431 Feature Identifiers & Effects Log Page:May Support 00:40:32.431 NVMe-MI Commands & Effects Log Page: May Support 00:40:32.431 Data Area 4 for Telemetry Log: Not Supported 00:40:32.431 Error Log Page Entries Supported: 1 00:40:32.431 Keep Alive: Not Supported 00:40:32.431 00:40:32.431 NVM Command Set Attributes 00:40:32.431 ========================== 00:40:32.431 Submission Queue Entry Size 00:40:32.431 Max: 64 00:40:32.431 Min: 64 00:40:32.431 Completion Queue Entry Size 00:40:32.431 Max: 16 00:40:32.431 Min: 16 00:40:32.431 Number of Namespaces: 256 00:40:32.431 Compare Command: Supported 00:40:32.431 Write Uncorrectable Command: Not Supported 00:40:32.431 Dataset Management Command: Supported 00:40:32.431 Write Zeroes Command: Supported 00:40:32.431 Set Features Save Field: Supported 00:40:32.431 Reservations: Not Supported 00:40:32.431 Timestamp: Supported 00:40:32.431 Copy: Supported 00:40:32.431 Volatile Write Cache: Present 00:40:32.431 Atomic Write Unit (Normal): 1 00:40:32.431 Atomic Write Unit (PFail): 1 00:40:32.431 Atomic Compare & Write Unit: 1 00:40:32.431 Fused Compare & Write: Not Supported 00:40:32.431 Scatter-Gather List 00:40:32.431 SGL Command Set: Supported 00:40:32.431 SGL Keyed: Not Supported 00:40:32.431 SGL Bit Bucket Descriptor: Not Supported 00:40:32.431 SGL Metadata Pointer: Not Supported 00:40:32.431 Oversized SGL: Not Supported 00:40:32.431 SGL Metadata Address: Not Supported 00:40:32.431 SGL Offset: Not Supported 00:40:32.431 Transport SGL Data Block: Not Supported 00:40:32.431 Replay Protected Memory Block: Not Supported 00:40:32.431 00:40:32.431 Firmware Slot Information 00:40:32.431 ========================= 00:40:32.431 Active slot: 1 00:40:32.431 Slot 1 Firmware Revision: 1.0 00:40:32.431 00:40:32.431 00:40:32.432 Commands Supported and Effects 00:40:32.432 ============================== 00:40:32.432 Admin Commands 00:40:32.432 -------------- 
00:40:32.432 Delete I/O Submission Queue (00h): Supported 00:40:32.432 Create I/O Submission Queue (01h): Supported 00:40:32.432 Get Log Page (02h): Supported 00:40:32.432 Delete I/O Completion Queue (04h): Supported 00:40:32.432 Create I/O Completion Queue (05h): Supported 00:40:32.432 Identify (06h): Supported 00:40:32.432 Abort (08h): Supported 00:40:32.432 Set Features (09h): Supported 00:40:32.432 Get Features (0Ah): Supported 00:40:32.432 Asynchronous Event Request (0Ch): Supported 00:40:32.432 Namespace Attachment (15h): Supported NS-Inventory-Change 00:40:32.432 Directive Send (19h): Supported 00:40:32.432 Directive Receive (1Ah): Supported 00:40:32.432 Virtualization Management (1Ch): Supported 00:40:32.432 Doorbell Buffer Config (7Ch): Supported 00:40:32.432 Format NVM (80h): Supported LBA-Change 00:40:32.432 I/O Commands 00:40:32.432 ------------ 00:40:32.432 Flush (00h): Supported LBA-Change 00:40:32.432 Write (01h): Supported LBA-Change 00:40:32.432 Read (02h): Supported 00:40:32.432 Compare (05h): Supported 00:40:32.432 Write Zeroes (08h): Supported LBA-Change 00:40:32.432 Dataset Management (09h): Supported LBA-Change 00:40:32.432 Unknown (0Ch): Supported 00:40:32.432 Unknown (12h): Supported 00:40:32.432 Copy (19h): Supported LBA-Change 00:40:32.432 Unknown (1Dh): Supported LBA-Change 00:40:32.432 00:40:32.432 Error Log 00:40:32.432 ========= 00:40:32.432 00:40:32.432 Arbitration 00:40:32.432 =========== 00:40:32.432 Arbitration Burst: no limit 00:40:32.432 00:40:32.432 Power Management 00:40:32.432 ================ 00:40:32.432 Number of Power States: 1 00:40:32.432 Current Power State: Power State #0 00:40:32.432 Power State #0: 00:40:32.432 Max Power: 25.00 W 00:40:32.432 Non-Operational State: Operational 00:40:32.432 Entry Latency: 16 microseconds 00:40:32.432 Exit Latency: 4 microseconds 00:40:32.432 Relative Read Throughput: 0 00:40:32.432 Relative Read Latency: 0 00:40:32.432 Relative Write Throughput: 0 00:40:32.432 Relative Write Latency: 0 00:40:32.696 Idle Power: Not Reported 00:40:32.696 Active Power: Not Reported 00:40:32.696 Non-Operational Permissive Mode: Not Supported 00:40:32.696 00:40:32.696 Health Information 00:40:32.696 ================== 00:40:32.696 Critical Warnings: 00:40:32.696 Available Spare Space: OK 00:40:32.696 Temperature: OK 00:40:32.696 Device Reliability: OK 00:40:32.696 Read Only: No 00:40:32.696 Volatile Memory Backup: OK 00:40:32.696 Current Temperature: 323 Kelvin (50 Celsius) 00:40:32.696 Temperature Threshold: 343 Kelvin (70 Celsius) 00:40:32.696 Available Spare: 0% 00:40:32.696 Available Spare Threshold: 0% 00:40:32.696 Life Percentage Used: 0% 00:40:32.696 Data Units Read: 4408 00:40:32.696 Data Units Written: 4065 00:40:32.696 Host Read Commands: 224260 00:40:32.696 Host Write Commands: 237208 00:40:32.696 Controller Busy Time: 0 minutes 00:40:32.696 Power Cycles: 0 00:40:32.696 Power On Hours: 0 hours 00:40:32.696 Unsafe Shutdowns: 0 00:40:32.696 Unrecoverable Media Errors: 0 00:40:32.696 Lifetime Error Log Entries: 0 00:40:32.696 Warning Temperature Time: 0 minutes 00:40:32.696 Critical Temperature Time: 0 minutes 00:40:32.696 00:40:32.696 Number of Queues 00:40:32.696 ================ 00:40:32.696 Number of I/O Submission Queues: 64 00:40:32.696 Number of I/O Completion Queues: 64 00:40:32.696 00:40:32.696 ZNS Specific Controller Data 00:40:32.696 ============================ 00:40:32.696 Zone Append Size Limit: 0 00:40:32.696 00:40:32.696 00:40:32.696 Active Namespaces 00:40:32.696 ================= 00:40:32.696 Namespace 
ID:1 00:40:32.696 Error Recovery Timeout: Unlimited 00:40:32.696 Command Set Identifier: NVM (00h) 00:40:32.696 Deallocate: Supported 00:40:32.696 Deallocated/Unwritten Error: Supported 00:40:32.696 Deallocated Read Value: All 0x00 00:40:32.696 Deallocate in Write Zeroes: Not Supported 00:40:32.696 Deallocated Guard Field: 0xFFFF 00:40:32.696 Flush: Supported 00:40:32.696 Reservation: Not Supported 00:40:32.696 Namespace Sharing Capabilities: Private 00:40:32.696 Size (in LBAs): 1310720 (5GiB) 00:40:32.696 Capacity (in LBAs): 1310720 (5GiB) 00:40:32.696 Utilization (in LBAs): 1310720 (5GiB) 00:40:32.696 Thin Provisioning: Not Supported 00:40:32.696 Per-NS Atomic Units: No 00:40:32.696 Maximum Single Source Range Length: 128 00:40:32.696 Maximum Copy Length: 128 00:40:32.696 Maximum Source Range Count: 128 00:40:32.696 NGUID/EUI64 Never Reused: No 00:40:32.696 Namespace Write Protected: No 00:40:32.696 Number of LBA Formats: 8 00:40:32.696 Current LBA Format: LBA Format #04 00:40:32.696 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:32.696 LBA Format #01: Data Size: 512 Metadata Size: 8 00:40:32.696 LBA Format #02: Data Size: 512 Metadata Size: 16 00:40:32.696 LBA Format #03: Data Size: 512 Metadata Size: 64 00:40:32.696 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:40:32.696 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:40:32.696 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:40:32.696 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:40:32.696 00:40:32.696 NVM Specific Namespace Data 00:40:32.696 =========================== 00:40:32.696 Logical Block Storage Tag Mask: 0 00:40:32.696 Protection Information Capabilities: 00:40:32.696 16b Guard Protection Information Storage Tag Support: No 00:40:32.696 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:40:32.696 Storage Tag Check Read Support: No 00:40:32.696 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:40:32.696 00:40:32.696 real 0m0.586s 00:40:32.696 user 0m0.255s 00:40:32.696 sys 0m0.242s 00:40:32.696 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:32.696 23:26:21 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:40:32.696 ************************************ 00:40:32.696 END TEST nvme_identify 00:40:32.696 ************************************ 00:40:32.696 23:26:21 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:32.696 23:26:21 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:40:32.696 23:26:21 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:32.696 23:26:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:32.696 23:26:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:32.696 
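A data-unit in the SMART section of the identify dumps is 1000 512-byte blocks (per the NVMe spec), so the counters above translate to actual traffic:

  echo '4408 * 512000 / 10^9' | bc -l    # Data Units Read    -> ~2.26 GB
  echo '4065 * 512000 / 10^9' | bc -l    # Data Units Written -> ~2.08 GB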
************************************ 00:40:32.696 START TEST nvme_perf 00:40:32.696 ************************************ 00:40:32.696 23:26:21 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:40:32.696 23:26:21 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:40:34.104 Initializing NVMe Controllers 00:40:34.104 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:34.104 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:34.104 Initialization complete. Launching workers. 00:40:34.104 ======================================================== 00:40:34.104 Latency(us) 00:40:34.104 Device Information : IOPS MiB/s Average min max 00:40:34.104 PCIE (0000:00:10.0) NSID 1 from core 0: 86072.04 1008.66 1485.83 727.59 5084.07 00:40:34.104 ======================================================== 00:40:34.104 Total : 86072.04 1008.66 1485.83 727.59 5084.07 00:40:34.104 00:40:34.104 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:34.104 ================================================================================= 00:40:34.104 1.00000% : 878.778us 00:40:34.104 10.00000% : 1020.276us 00:40:34.104 25.00000% : 1191.564us 00:40:34.104 50.00000% : 1444.771us 00:40:34.104 75.00000% : 1705.425us 00:40:34.104 90.00000% : 1936.291us 00:40:34.104 95.00000% : 2204.393us 00:40:34.104 98.00000% : 2695.913us 00:40:34.104 99.00000% : 2919.331us 00:40:34.104 99.50000% : 3127.855us 00:40:34.104 99.90000% : 3961.949us 00:40:34.104 99.99000% : 4915.200us 00:40:34.104 99.99900% : 5093.935us 00:40:34.104 99.99990% : 5093.935us 00:40:34.104 99.99999% : 5093.935us 00:40:34.104 00:40:34.104 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:34.104 ============================================================================== 00:40:34.104 Range in us Cumulative IO count 00:40:34.104 726.109 - 729.833: 0.0012% ( 1) 00:40:34.104 729.833 - 733.556: 0.0023% ( 1) 00:40:34.104 741.004 - 744.727: 0.0046% ( 2) 00:40:34.104 744.727 - 748.451: 0.0070% ( 2) 00:40:34.104 748.451 - 752.175: 0.0093% ( 2) 00:40:34.104 752.175 - 755.898: 0.0104% ( 1) 00:40:34.104 755.898 - 759.622: 0.0116% ( 1) 00:40:34.104 759.622 - 763.345: 0.0163% ( 4) 00:40:34.104 763.345 - 767.069: 0.0197% ( 3) 00:40:34.104 767.069 - 770.793: 0.0209% ( 1) 00:40:34.104 770.793 - 774.516: 0.0232% ( 2) 00:40:34.104 774.516 - 778.240: 0.0279% ( 4) 00:40:34.104 778.240 - 781.964: 0.0325% ( 4) 00:40:34.104 781.964 - 785.687: 0.0348% ( 2) 00:40:34.104 785.687 - 789.411: 0.0395% ( 4) 00:40:34.104 789.411 - 793.135: 0.0453% ( 5) 00:40:34.104 793.135 - 796.858: 0.0546% ( 8) 00:40:34.104 796.858 - 800.582: 0.0592% ( 4) 00:40:34.104 800.582 - 804.305: 0.0720% ( 11) 00:40:34.104 804.305 - 808.029: 0.0871% ( 13) 00:40:34.104 808.029 - 811.753: 0.1010% ( 12) 00:40:34.104 811.753 - 815.476: 0.1161% ( 13) 00:40:34.104 815.476 - 819.200: 0.1370% ( 18) 00:40:34.104 819.200 - 822.924: 0.1602% ( 20) 00:40:34.104 822.924 - 826.647: 0.1764% ( 14) 00:40:34.104 826.647 - 830.371: 0.2066% ( 26) 00:40:34.104 830.371 - 834.095: 0.2310% ( 21) 00:40:34.104 834.095 - 837.818: 0.2693% ( 33) 00:40:34.104 837.818 - 841.542: 0.3192% ( 43) 00:40:34.104 841.542 - 845.265: 0.3703% ( 44) 00:40:34.104 845.265 - 848.989: 0.4051% ( 30) 00:40:34.104 848.989 - 852.713: 0.4539% ( 42) 00:40:34.104 852.713 - 856.436: 0.5166% ( 54) 00:40:34.104 856.436 - 860.160: 0.6071% ( 78) 00:40:34.104 860.160 - 863.884: 0.6768% ( 60) 00:40:34.104 863.884 - 867.607: 
0.7290% ( 45) 00:40:34.104 867.607 - 871.331: 0.8103% ( 70) 00:40:34.104 871.331 - 875.055: 0.9066% ( 83) 00:40:34.104 875.055 - 878.778: 1.0088% ( 88) 00:40:34.104 878.778 - 882.502: 1.1144% ( 91) 00:40:34.104 882.502 - 886.225: 1.2050% ( 78) 00:40:34.104 886.225 - 889.949: 1.3373% ( 114) 00:40:34.104 889.949 - 893.673: 1.4696% ( 114) 00:40:34.104 893.673 - 897.396: 1.6055% ( 117) 00:40:34.104 897.396 - 901.120: 1.7331% ( 110) 00:40:34.104 901.120 - 904.844: 1.8806% ( 127) 00:40:34.104 904.844 - 908.567: 2.0419% ( 139) 00:40:34.104 908.567 - 912.291: 2.2103% ( 145) 00:40:34.104 912.291 - 916.015: 2.3913% ( 156) 00:40:34.104 916.015 - 919.738: 2.5806% ( 163) 00:40:34.104 919.738 - 923.462: 2.7884% ( 179) 00:40:34.104 923.462 - 927.185: 3.0066% ( 188) 00:40:34.104 927.185 - 930.909: 3.2399% ( 201) 00:40:34.104 930.909 - 934.633: 3.4756% ( 203) 00:40:34.104 934.633 - 938.356: 3.7356% ( 224) 00:40:34.104 938.356 - 942.080: 3.9446% ( 180) 00:40:34.104 942.080 - 945.804: 4.2104% ( 229) 00:40:34.104 945.804 - 949.527: 4.4727% ( 226) 00:40:34.104 949.527 - 953.251: 4.7537% ( 242) 00:40:34.104 953.251 - 960.698: 5.3027% ( 473) 00:40:34.104 960.698 - 968.145: 5.8507% ( 472) 00:40:34.104 968.145 - 975.593: 6.4206% ( 491) 00:40:34.104 975.593 - 983.040: 7.0220% ( 518) 00:40:34.104 983.040 - 990.487: 7.6500% ( 541) 00:40:34.104 990.487 - 997.935: 8.2850% ( 547) 00:40:34.104 997.935 - 1005.382: 8.8828% ( 515) 00:40:34.104 1005.382 - 1012.829: 9.5247% ( 553) 00:40:34.104 1012.829 - 1020.276: 10.1423% ( 532) 00:40:34.104 1020.276 - 1027.724: 10.7239% ( 501) 00:40:34.104 1027.724 - 1035.171: 11.3809% ( 566) 00:40:34.104 1035.171 - 1042.618: 12.0252% ( 555) 00:40:34.104 1042.618 - 1050.065: 12.6230% ( 515) 00:40:34.104 1050.065 - 1057.513: 13.2685% ( 556) 00:40:34.104 1057.513 - 1064.960: 13.9069% ( 550) 00:40:34.104 1064.960 - 1072.407: 14.5431% ( 548) 00:40:34.104 1072.407 - 1079.855: 15.2025% ( 568) 00:40:34.104 1079.855 - 1087.302: 15.8734% ( 578) 00:40:34.104 1087.302 - 1094.749: 16.5119% ( 550) 00:40:34.104 1094.749 - 1102.196: 17.1898% ( 584) 00:40:34.104 1102.196 - 1109.644: 17.8318% ( 553) 00:40:34.104 1109.644 - 1117.091: 18.5202% ( 593) 00:40:34.104 1117.091 - 1124.538: 19.2155% ( 599) 00:40:34.104 1124.538 - 1131.985: 19.9062% ( 595) 00:40:34.104 1131.985 - 1139.433: 20.5470% ( 552) 00:40:34.104 1139.433 - 1146.880: 21.2632% ( 617) 00:40:34.104 1146.880 - 1154.327: 21.9191% ( 565) 00:40:34.104 1154.327 - 1161.775: 22.6063% ( 592) 00:40:34.104 1161.775 - 1169.222: 23.3365% ( 629) 00:40:34.104 1169.222 - 1176.669: 24.0179% ( 587) 00:40:34.104 1176.669 - 1184.116: 24.7376% ( 620) 00:40:34.104 1184.116 - 1191.564: 25.4609% ( 623) 00:40:34.104 1191.564 - 1199.011: 26.1759% ( 616) 00:40:34.104 1199.011 - 1206.458: 26.8864% ( 612) 00:40:34.104 1206.458 - 1213.905: 27.6398% ( 649) 00:40:34.104 1213.905 - 1221.353: 28.3374% ( 601) 00:40:34.104 1221.353 - 1228.800: 29.0966% ( 654) 00:40:34.104 1228.800 - 1236.247: 29.8303% ( 632) 00:40:34.104 1236.247 - 1243.695: 30.5477% ( 618) 00:40:34.104 1243.695 - 1251.142: 31.2813% ( 632) 00:40:34.104 1251.142 - 1258.589: 31.9895% ( 610) 00:40:34.104 1258.589 - 1266.036: 32.7359% ( 643) 00:40:34.104 1266.036 - 1273.484: 33.4440% ( 610) 00:40:34.104 1273.484 - 1280.931: 34.2032% ( 654) 00:40:34.104 1280.931 - 1288.378: 34.9334% ( 629) 00:40:34.104 1288.378 - 1295.825: 35.6403% ( 609) 00:40:34.104 1295.825 - 1303.273: 36.3914% ( 647) 00:40:34.104 1303.273 - 1310.720: 37.1378% ( 643) 00:40:34.104 1310.720 - 1318.167: 37.8691% ( 630) 00:40:34.104 1318.167 - 1325.615: 
38.6028% ( 632) 00:40:34.104 1325.615 - 1333.062: 39.3469% ( 641) 00:40:34.104 1333.062 - 1340.509: 40.1177% ( 664) 00:40:34.104 1340.509 - 1347.956: 40.8572% ( 637) 00:40:34.104 1347.956 - 1355.404: 41.6338% ( 669) 00:40:34.104 1355.404 - 1362.851: 42.3906% ( 652) 00:40:34.104 1362.851 - 1370.298: 43.1046% ( 615) 00:40:34.104 1370.298 - 1377.745: 43.8870% ( 674) 00:40:34.104 1377.745 - 1385.193: 44.6067% ( 620) 00:40:34.104 1385.193 - 1392.640: 45.3880% ( 673) 00:40:34.104 1392.640 - 1400.087: 46.1321% ( 641) 00:40:34.104 1400.087 - 1407.535: 46.9087% ( 669) 00:40:34.104 1407.535 - 1414.982: 47.6388% ( 629) 00:40:34.104 1414.982 - 1422.429: 48.3911% ( 648) 00:40:34.104 1422.429 - 1429.876: 49.1479% ( 652) 00:40:34.104 1429.876 - 1437.324: 49.8770% ( 628) 00:40:34.104 1437.324 - 1444.771: 50.6419% ( 659) 00:40:34.104 1444.771 - 1452.218: 51.3721% ( 629) 00:40:34.104 1452.218 - 1459.665: 52.1464% ( 667) 00:40:34.104 1459.665 - 1467.113: 52.8801% ( 632) 00:40:34.104 1467.113 - 1474.560: 53.6300% ( 646) 00:40:34.104 1474.560 - 1482.007: 54.3555% ( 625) 00:40:34.104 1482.007 - 1489.455: 55.0996% ( 641) 00:40:34.104 1489.455 - 1496.902: 55.8158% ( 617) 00:40:34.104 1496.902 - 1504.349: 56.5658% ( 646) 00:40:34.104 1504.349 - 1511.796: 57.2797% ( 615) 00:40:34.104 1511.796 - 1519.244: 58.0191% ( 637) 00:40:34.104 1519.244 - 1526.691: 58.7307% ( 613) 00:40:34.104 1526.691 - 1534.138: 59.4412% ( 612) 00:40:34.104 1534.138 - 1541.585: 60.1446% ( 606) 00:40:34.104 1541.585 - 1549.033: 60.8562% ( 613) 00:40:34.104 1549.033 - 1556.480: 61.5562% ( 603) 00:40:34.104 1556.480 - 1563.927: 62.3015% ( 642) 00:40:34.104 1563.927 - 1571.375: 62.9678% ( 574) 00:40:34.104 1571.375 - 1578.822: 63.6794% ( 613) 00:40:34.104 1578.822 - 1586.269: 64.3910% ( 613) 00:40:34.104 1586.269 - 1593.716: 65.0991% ( 610) 00:40:34.104 1593.716 - 1601.164: 65.8073% ( 610) 00:40:34.104 1601.164 - 1608.611: 66.4782% ( 578) 00:40:34.104 1608.611 - 1616.058: 67.2119% ( 632) 00:40:34.104 1616.058 - 1623.505: 67.8608% ( 559) 00:40:34.104 1623.505 - 1630.953: 68.5712% ( 612) 00:40:34.104 1630.953 - 1638.400: 69.2201% ( 559) 00:40:34.104 1638.400 - 1645.847: 69.9108% ( 595) 00:40:34.104 1645.847 - 1653.295: 70.6248% ( 615) 00:40:34.104 1653.295 - 1660.742: 71.2830% ( 567) 00:40:34.104 1660.742 - 1668.189: 71.9864% ( 606) 00:40:34.104 1668.189 - 1675.636: 72.6562% ( 577) 00:40:34.104 1675.636 - 1683.084: 73.3713% ( 616) 00:40:34.104 1683.084 - 1690.531: 74.0168% ( 556) 00:40:34.104 1690.531 - 1697.978: 74.7458% ( 628) 00:40:34.104 1697.978 - 1705.425: 75.4121% ( 574) 00:40:34.104 1705.425 - 1712.873: 76.0993% ( 592) 00:40:34.104 1712.873 - 1720.320: 76.7552% ( 565) 00:40:34.104 1720.320 - 1727.767: 77.4180% ( 571) 00:40:34.105 1727.767 - 1735.215: 78.0960% ( 584) 00:40:34.105 1735.215 - 1742.662: 78.7275% ( 544) 00:40:34.105 1742.662 - 1750.109: 79.3717% ( 555) 00:40:34.105 1750.109 - 1757.556: 79.9754% ( 520) 00:40:34.105 1757.556 - 1765.004: 80.6173% ( 553) 00:40:34.105 1765.004 - 1772.451: 81.1989% ( 501) 00:40:34.105 1772.451 - 1779.898: 81.7944% ( 513) 00:40:34.105 1779.898 - 1787.345: 82.3807% ( 505) 00:40:34.105 1787.345 - 1794.793: 82.9506% ( 491) 00:40:34.105 1794.793 - 1802.240: 83.4870% ( 462) 00:40:34.105 1802.240 - 1809.687: 83.9931% ( 436) 00:40:34.105 1809.687 - 1817.135: 84.5108% ( 446) 00:40:34.105 1817.135 - 1824.582: 85.0146% ( 434) 00:40:34.105 1824.582 - 1832.029: 85.5033% ( 421) 00:40:34.105 1832.029 - 1839.476: 85.9770% ( 408) 00:40:34.105 1839.476 - 1846.924: 86.4274% ( 388) 00:40:34.105 1846.924 - 1854.371: 
86.8348% ( 351) 00:40:34.105 1854.371 - 1861.818: 87.2435% ( 352) 00:40:34.105 1861.818 - 1869.265: 87.6289% ( 332) 00:40:34.105 1869.265 - 1876.713: 88.0015% ( 321) 00:40:34.105 1876.713 - 1884.160: 88.3381% ( 290) 00:40:34.105 1884.160 - 1891.607: 88.6759% ( 291) 00:40:34.105 1891.607 - 1899.055: 88.9766% ( 259) 00:40:34.105 1899.055 - 1906.502: 89.2552% ( 240) 00:40:34.105 1906.502 - 1921.396: 89.7683% ( 442) 00:40:34.105 1921.396 - 1936.291: 90.2628% ( 426) 00:40:34.105 1936.291 - 1951.185: 90.6819% ( 361) 00:40:34.105 1951.185 - 1966.080: 91.0557% ( 322) 00:40:34.105 1966.080 - 1980.975: 91.4329% ( 325) 00:40:34.105 1980.975 - 1995.869: 91.7638% ( 285) 00:40:34.105 1995.869 - 2010.764: 92.0819% ( 274) 00:40:34.105 2010.764 - 2025.658: 92.3779% ( 255) 00:40:34.105 2025.658 - 2040.553: 92.6681% ( 250) 00:40:34.105 2040.553 - 2055.447: 92.9421% ( 236) 00:40:34.105 2055.447 - 2070.342: 93.2044% ( 226) 00:40:34.105 2070.342 - 2085.236: 93.4818% ( 239) 00:40:34.105 2085.236 - 2100.131: 93.7233% ( 208) 00:40:34.105 2100.131 - 2115.025: 93.9543% ( 199) 00:40:34.105 2115.025 - 2129.920: 94.1795% ( 194) 00:40:34.105 2129.920 - 2144.815: 94.3769% ( 170) 00:40:34.105 2144.815 - 2159.709: 94.5568% ( 155) 00:40:34.105 2159.709 - 2174.604: 94.7379% ( 156) 00:40:34.105 2174.604 - 2189.498: 94.9039% ( 143) 00:40:34.105 2189.498 - 2204.393: 95.0548% ( 130) 00:40:34.105 2204.393 - 2219.287: 95.2022% ( 127) 00:40:34.105 2219.287 - 2234.182: 95.3404% ( 119) 00:40:34.105 2234.182 - 2249.076: 95.4657% ( 108) 00:40:34.105 2249.076 - 2263.971: 95.5772% ( 96) 00:40:34.105 2263.971 - 2278.865: 95.6944% ( 101) 00:40:34.105 2278.865 - 2293.760: 95.8012% ( 92) 00:40:34.105 2293.760 - 2308.655: 95.9103% ( 94) 00:40:34.105 2308.655 - 2323.549: 96.0102% ( 86) 00:40:34.105 2323.549 - 2338.444: 96.1054% ( 82) 00:40:34.105 2338.444 - 2353.338: 96.1982% ( 80) 00:40:34.105 2353.338 - 2368.233: 96.2923% ( 81) 00:40:34.105 2368.233 - 2383.127: 96.3840% ( 79) 00:40:34.105 2383.127 - 2398.022: 96.4664% ( 71) 00:40:34.105 2398.022 - 2412.916: 96.5558% ( 77) 00:40:34.105 2412.916 - 2427.811: 96.6405% ( 73) 00:40:34.105 2427.811 - 2442.705: 96.7253% ( 73) 00:40:34.105 2442.705 - 2457.600: 96.8053% ( 69) 00:40:34.105 2457.600 - 2472.495: 96.8913% ( 74) 00:40:34.105 2472.495 - 2487.389: 96.9725% ( 70) 00:40:34.105 2487.389 - 2502.284: 97.0491% ( 66) 00:40:34.105 2502.284 - 2517.178: 97.1211% ( 62) 00:40:34.105 2517.178 - 2532.073: 97.1977% ( 66) 00:40:34.105 2532.073 - 2546.967: 97.2801% ( 71) 00:40:34.105 2546.967 - 2561.862: 97.3579% ( 67) 00:40:34.105 2561.862 - 2576.756: 97.4322% ( 64) 00:40:34.105 2576.756 - 2591.651: 97.5181% ( 74) 00:40:34.105 2591.651 - 2606.545: 97.5982% ( 69) 00:40:34.105 2606.545 - 2621.440: 97.6783% ( 69) 00:40:34.105 2621.440 - 2636.335: 97.7584% ( 69) 00:40:34.105 2636.335 - 2651.229: 97.8339% ( 65) 00:40:34.105 2651.229 - 2666.124: 97.9058% ( 62) 00:40:34.105 2666.124 - 2681.018: 97.9778% ( 62) 00:40:34.105 2681.018 - 2695.913: 98.0509% ( 63) 00:40:34.105 2695.913 - 2710.807: 98.1276% ( 66) 00:40:34.105 2710.807 - 2725.702: 98.1937% ( 57) 00:40:34.105 2725.702 - 2740.596: 98.2634% ( 60) 00:40:34.105 2740.596 - 2755.491: 98.3388% ( 65) 00:40:34.105 2755.491 - 2770.385: 98.4154% ( 66) 00:40:34.105 2770.385 - 2785.280: 98.4874% ( 62) 00:40:34.105 2785.280 - 2800.175: 98.5582% ( 61) 00:40:34.105 2800.175 - 2815.069: 98.6279% ( 60) 00:40:34.105 2815.069 - 2829.964: 98.6975% ( 60) 00:40:34.105 2829.964 - 2844.858: 98.7637% ( 57) 00:40:34.105 2844.858 - 2859.753: 98.8333% ( 60) 00:40:34.105 2859.753 - 
2874.647: 98.8902% ( 49) 00:40:34.105 2874.647 - 2889.542: 98.9425% ( 45) 00:40:34.105 2889.542 - 2904.436: 98.9970% ( 47) 00:40:34.105 2904.436 - 2919.331: 99.0469% ( 43) 00:40:34.105 2919.331 - 2934.225: 99.0945% ( 41) 00:40:34.105 2934.225 - 2949.120: 99.1410% ( 40) 00:40:34.105 2949.120 - 2964.015: 99.1862% ( 39) 00:40:34.105 2964.015 - 2978.909: 99.2269% ( 35) 00:40:34.105 2978.909 - 2993.804: 99.2652% ( 33) 00:40:34.105 2993.804 - 3008.698: 99.3047% ( 34) 00:40:34.105 3008.698 - 3023.593: 99.3383% ( 29) 00:40:34.105 3023.593 - 3038.487: 99.3673% ( 25) 00:40:34.105 3038.487 - 3053.382: 99.4010% ( 29) 00:40:34.105 3053.382 - 3068.276: 99.4265% ( 22) 00:40:34.105 3068.276 - 3083.171: 99.4532% ( 23) 00:40:34.105 3083.171 - 3098.065: 99.4765% ( 20) 00:40:34.105 3098.065 - 3112.960: 99.4974% ( 18) 00:40:34.105 3112.960 - 3127.855: 99.5136% ( 14) 00:40:34.105 3127.855 - 3142.749: 99.5345% ( 18) 00:40:34.105 3142.749 - 3157.644: 99.5508% ( 14) 00:40:34.105 3157.644 - 3172.538: 99.5658% ( 13) 00:40:34.105 3172.538 - 3187.433: 99.5809% ( 13) 00:40:34.105 3187.433 - 3202.327: 99.5937% ( 11) 00:40:34.105 3202.327 - 3217.222: 99.6042% ( 9) 00:40:34.105 3217.222 - 3232.116: 99.6134% ( 8) 00:40:34.105 3232.116 - 3247.011: 99.6204% ( 6) 00:40:34.105 3247.011 - 3261.905: 99.6285% ( 7) 00:40:34.105 3261.905 - 3276.800: 99.6355% ( 6) 00:40:34.105 3276.800 - 3291.695: 99.6425% ( 6) 00:40:34.105 3291.695 - 3306.589: 99.6483% ( 5) 00:40:34.105 3306.589 - 3321.484: 99.6564% ( 7) 00:40:34.105 3321.484 - 3336.378: 99.6657% ( 8) 00:40:34.105 3336.378 - 3351.273: 99.6726% ( 6) 00:40:34.105 3351.273 - 3366.167: 99.6808% ( 7) 00:40:34.105 3366.167 - 3381.062: 99.6866% ( 5) 00:40:34.105 3381.062 - 3395.956: 99.6935% ( 6) 00:40:34.105 3395.956 - 3410.851: 99.7005% ( 6) 00:40:34.105 3410.851 - 3425.745: 99.7063% ( 5) 00:40:34.105 3425.745 - 3440.640: 99.7109% ( 4) 00:40:34.105 3440.640 - 3455.535: 99.7191% ( 7) 00:40:34.105 3455.535 - 3470.429: 99.7249% ( 5) 00:40:34.105 3470.429 - 3485.324: 99.7307% ( 5) 00:40:34.105 3485.324 - 3500.218: 99.7376% ( 6) 00:40:34.105 3500.218 - 3515.113: 99.7446% ( 6) 00:40:34.105 3515.113 - 3530.007: 99.7527% ( 7) 00:40:34.105 3530.007 - 3544.902: 99.7574% ( 4) 00:40:34.105 3544.902 - 3559.796: 99.7643% ( 6) 00:40:34.105 3559.796 - 3574.691: 99.7690% ( 4) 00:40:34.105 3574.691 - 3589.585: 99.7725% ( 3) 00:40:34.105 3589.585 - 3604.480: 99.7806% ( 7) 00:40:34.105 3604.480 - 3619.375: 99.7876% ( 6) 00:40:34.105 3619.375 - 3634.269: 99.7934% ( 5) 00:40:34.105 3634.269 - 3649.164: 99.7980% ( 4) 00:40:34.105 3649.164 - 3664.058: 99.8050% ( 6) 00:40:34.105 3664.058 - 3678.953: 99.8108% ( 5) 00:40:34.105 3678.953 - 3693.847: 99.8143% ( 3) 00:40:34.105 3693.847 - 3708.742: 99.8201% ( 5) 00:40:34.105 3708.742 - 3723.636: 99.8247% ( 4) 00:40:34.105 3723.636 - 3738.531: 99.8294% ( 4) 00:40:34.105 3738.531 - 3753.425: 99.8363% ( 6) 00:40:34.105 3753.425 - 3768.320: 99.8398% ( 3) 00:40:34.105 3768.320 - 3783.215: 99.8456% ( 5) 00:40:34.105 3783.215 - 3798.109: 99.8479% ( 2) 00:40:34.105 3798.109 - 3813.004: 99.8526% ( 4) 00:40:34.105 3813.004 - 3842.793: 99.8642% ( 10) 00:40:34.105 3842.793 - 3872.582: 99.8746% ( 9) 00:40:34.105 3872.582 - 3902.371: 99.8862% ( 10) 00:40:34.105 3902.371 - 3932.160: 99.8978% ( 10) 00:40:34.105 3932.160 - 3961.949: 99.9083% ( 9) 00:40:34.105 3961.949 - 3991.738: 99.9176% ( 8) 00:40:34.105 3991.738 - 4021.527: 99.9280% ( 9) 00:40:34.105 4021.527 - 4051.316: 99.9338% ( 5) 00:40:34.105 4051.316 - 4081.105: 99.9373% ( 3) 00:40:34.105 4081.105 - 4110.895: 99.9408% ( 3) 
00:40:34.105 4110.895 - 4140.684: 99.9443% ( 3) 00:40:34.105 4140.684 - 4170.473: 99.9454% ( 1) 00:40:34.105 4170.473 - 4200.262: 99.9478% ( 2) 00:40:34.105 4200.262 - 4230.051: 99.9501% ( 2) 00:40:34.105 4230.051 - 4259.840: 99.9512% ( 1) 00:40:34.105 4259.840 - 4289.629: 99.9536% ( 2) 00:40:34.105 4289.629 - 4319.418: 99.9547% ( 1) 00:40:34.105 4319.418 - 4349.207: 99.9570% ( 2) 00:40:34.105 4349.207 - 4378.996: 99.9582% ( 1) 00:40:34.105 4378.996 - 4408.785: 99.9605% ( 2) 00:40:34.105 4408.785 - 4438.575: 99.9629% ( 2) 00:40:34.105 4438.575 - 4468.364: 99.9640% ( 1) 00:40:34.105 4468.364 - 4498.153: 99.9663% ( 2) 00:40:34.105 4498.153 - 4527.942: 99.9687% ( 2) 00:40:34.105 4527.942 - 4557.731: 99.9698% ( 1) 00:40:34.105 4557.731 - 4587.520: 99.9721% ( 2) 00:40:34.105 4587.520 - 4617.309: 99.9733% ( 1) 00:40:34.105 4617.309 - 4647.098: 99.9756% ( 2) 00:40:34.105 4647.098 - 4676.887: 99.9779% ( 2) 00:40:34.105 4676.887 - 4706.676: 99.9791% ( 1) 00:40:34.105 4706.676 - 4736.465: 99.9814% ( 2) 00:40:34.105 4736.465 - 4766.255: 99.9826% ( 1) 00:40:34.105 4766.255 - 4796.044: 99.9837% ( 1) 00:40:34.105 4796.044 - 4825.833: 99.9861% ( 2) 00:40:34.105 4825.833 - 4855.622: 99.9872% ( 1) 00:40:34.105 4855.622 - 4885.411: 99.9896% ( 2) 00:40:34.105 4885.411 - 4915.200: 99.9907% ( 1) 00:40:34.105 4915.200 - 4944.989: 99.9930% ( 2) 00:40:34.105 4944.989 - 4974.778: 99.9954% ( 2) 00:40:34.105 4974.778 - 5004.567: 99.9965% ( 1) 00:40:34.105 5004.567 - 5034.356: 99.9988% ( 2) 00:40:34.105 5064.145 - 5093.935: 100.0000% ( 1) 00:40:34.105 00:40:34.105 23:26:23 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:40:35.037 Initializing NVMe Controllers 00:40:35.037 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:35.037 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:35.037 Initialization complete. Launching workers. 
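For reference while the write-workload run just echoed above executes: both spdk_nvme_perf invocations in this test share the same flag set, annotated below. The readings of -q/-w/-o/-t/-i follow the tool's usage conventions; reading -LL as the switch that produced the latency summaries and histograms is an inference from this log itself, and the -N flag on the read run is left unannotated rather than guessed at.

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128   `# queue depth: 128 outstanding I/Os` \
      -w write `# workload type; the first run above used -w read` \
      -o 12288 `# I/O size in bytes (12 KiB)` \
      -t 1     `# run time in seconds` \
      -LL      `# latency tracking; the histograms in this log come from this` \
      -i 0     `# shared memory ID for the SPDK instance`

The "Range in us : Cumulative IO count" tables these runs print read as a CDF: each row's percentage is cumulative across buckets and the parenthesized number is that bucket's own count. A throwaway gawk helper, assuming the histogram slice has been saved one row per line as the tool originally printed it (filename is hypothetical):

  # gawk-only (match() with an array argument); prints the upper edge of the
  # first bucket whose cumulative percentage reaches the target, i.e. an
  # upper bound on the requested percentile.
  percentile() {
      gawk -v p="$1" 'match($0, /([0-9.]+): +([0-9.]+)% /, m) {
          if (m[2] + 0 >= p) { print m[1] " us"; exit } }'
  }
  percentile 99 < read_histogram.txt   # hypothetical saved log slice
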
00:40:35.037 ======================================================== 00:40:35.037 Latency(us) 00:40:35.037 Device Information : IOPS MiB/s Average min max 00:40:35.037 PCIE (0000:00:10.0) NSID 1 from core 0: 73030.00 855.82 1751.97 697.49 13994.86 00:40:35.037 ======================================================== 00:40:35.037 Total : 73030.00 855.82 1751.97 697.49 13994.86 00:40:35.037 00:40:35.037 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:35.037 ================================================================================= 00:40:35.037 1.00000% : 1117.091us 00:40:35.037 10.00000% : 1325.615us 00:40:35.037 25.00000% : 1452.218us 00:40:35.037 50.00000% : 1645.847us 00:40:35.037 75.00000% : 1921.396us 00:40:35.037 90.00000% : 2234.182us 00:40:35.037 95.00000% : 2517.178us 00:40:35.037 98.00000% : 2949.120us 00:40:35.037 99.00000% : 3366.167us 00:40:35.037 99.50000% : 3738.531us 00:40:35.037 99.90000% : 13405.091us 00:40:35.037 99.99000% : 13881.716us 00:40:35.037 99.99900% : 14000.873us 00:40:35.037 99.99990% : 14000.873us 00:40:35.037 99.99999% : 14000.873us 00:40:35.037 00:40:35.037 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:35.037 ============================================================================== 00:40:35.037 Range in us Cumulative IO count 00:40:35.037 696.320 - 700.044: 0.0014% ( 1) 00:40:35.037 729.833 - 733.556: 0.0027% ( 1) 00:40:35.037 744.727 - 748.451: 0.0041% ( 1) 00:40:35.037 748.451 - 752.175: 0.0068% ( 2) 00:40:35.037 752.175 - 755.898: 0.0082% ( 1) 00:40:35.037 759.622 - 763.345: 0.0096% ( 1) 00:40:35.037 767.069 - 770.793: 0.0110% ( 1) 00:40:35.037 770.793 - 774.516: 0.0123% ( 1) 00:40:35.037 778.240 - 781.964: 0.0137% ( 1) 00:40:35.037 785.687 - 789.411: 0.0178% ( 3) 00:40:35.037 789.411 - 793.135: 0.0192% ( 1) 00:40:35.037 793.135 - 796.858: 0.0205% ( 1) 00:40:35.037 796.858 - 800.582: 0.0219% ( 1) 00:40:35.037 804.305 - 808.029: 0.0246% ( 2) 00:40:35.037 815.476 - 819.200: 0.0260% ( 1) 00:40:35.037 819.200 - 822.924: 0.0288% ( 2) 00:40:35.037 822.924 - 826.647: 0.0301% ( 1) 00:40:35.037 826.647 - 830.371: 0.0329% ( 2) 00:40:35.037 830.371 - 834.095: 0.0342% ( 1) 00:40:35.037 834.095 - 837.818: 0.0370% ( 2) 00:40:35.037 837.818 - 841.542: 0.0383% ( 1) 00:40:35.037 841.542 - 845.265: 0.0411% ( 2) 00:40:35.037 845.265 - 848.989: 0.0452% ( 3) 00:40:35.037 848.989 - 852.713: 0.0479% ( 2) 00:40:35.037 852.713 - 856.436: 0.0493% ( 1) 00:40:35.037 856.436 - 860.160: 0.0534% ( 3) 00:40:35.037 867.607 - 871.331: 0.0548% ( 1) 00:40:35.037 871.331 - 875.055: 0.0575% ( 2) 00:40:35.037 878.778 - 882.502: 0.0616% ( 3) 00:40:35.037 882.502 - 886.225: 0.0630% ( 1) 00:40:35.037 886.225 - 889.949: 0.0671% ( 3) 00:40:35.037 889.949 - 893.673: 0.0685% ( 1) 00:40:35.037 893.673 - 897.396: 0.0698% ( 1) 00:40:35.037 901.120 - 904.844: 0.0767% ( 5) 00:40:35.037 908.567 - 912.291: 0.0794% ( 2) 00:40:35.037 916.015 - 919.738: 0.0835% ( 3) 00:40:35.037 919.738 - 923.462: 0.0849% ( 1) 00:40:35.037 923.462 - 927.185: 0.0917% ( 5) 00:40:35.037 927.185 - 930.909: 0.0945% ( 2) 00:40:35.037 930.909 - 934.633: 0.0986% ( 3) 00:40:35.037 934.633 - 938.356: 0.1013% ( 2) 00:40:35.037 938.356 - 942.080: 0.1082% ( 5) 00:40:35.037 942.080 - 945.804: 0.1164% ( 6) 00:40:35.037 945.804 - 949.527: 0.1232% ( 5) 00:40:35.037 949.527 - 953.251: 0.1260% ( 2) 00:40:35.037 953.251 - 960.698: 0.1356% ( 7) 00:40:35.037 960.698 - 968.145: 0.1520% ( 12) 00:40:35.037 968.145 - 975.593: 0.1602% ( 6) 00:40:35.037 975.593 - 983.040: 0.1698% ( 7) 00:40:35.037 
983.040 - 990.487: 0.1862% ( 12) 00:40:35.037 990.487 - 997.935: 0.1917% ( 4) 00:40:35.037 997.935 - 1005.382: 0.2013% ( 7) 00:40:35.037 1005.382 - 1012.829: 0.2122% ( 8) 00:40:35.037 1012.829 - 1020.276: 0.2273% ( 11) 00:40:35.037 1020.276 - 1027.724: 0.2492% ( 16) 00:40:35.037 1027.724 - 1035.171: 0.2739% ( 18) 00:40:35.038 1035.171 - 1042.618: 0.3108% ( 27) 00:40:35.038 1042.618 - 1050.065: 0.3505% ( 29) 00:40:35.038 1050.065 - 1057.513: 0.4094% ( 43) 00:40:35.038 1057.513 - 1064.960: 0.4615% ( 38) 00:40:35.038 1064.960 - 1072.407: 0.5094% ( 35) 00:40:35.038 1072.407 - 1079.855: 0.5737% ( 47) 00:40:35.038 1079.855 - 1087.302: 0.6573% ( 61) 00:40:35.038 1087.302 - 1094.749: 0.7367% ( 58) 00:40:35.038 1094.749 - 1102.196: 0.8312% ( 69) 00:40:35.038 1102.196 - 1109.644: 0.9188% ( 64) 00:40:35.038 1109.644 - 1117.091: 1.0270% ( 79) 00:40:35.038 1117.091 - 1124.538: 1.1625% ( 99) 00:40:35.038 1124.538 - 1131.985: 1.2707% ( 79) 00:40:35.038 1131.985 - 1139.433: 1.4145% ( 105) 00:40:35.038 1139.433 - 1146.880: 1.5583% ( 105) 00:40:35.038 1146.880 - 1154.327: 1.7185% ( 117) 00:40:35.038 1154.327 - 1161.775: 1.8800% ( 118) 00:40:35.038 1161.775 - 1169.222: 2.0718% ( 140) 00:40:35.038 1169.222 - 1176.669: 2.2826% ( 154) 00:40:35.038 1176.669 - 1184.116: 2.5168% ( 171) 00:40:35.038 1184.116 - 1191.564: 2.7263% ( 153) 00:40:35.038 1191.564 - 1199.011: 2.9714% ( 179) 00:40:35.038 1199.011 - 1206.458: 3.2603% ( 211) 00:40:35.038 1206.458 - 1213.905: 3.5547% ( 215) 00:40:35.038 1213.905 - 1221.353: 3.8655% ( 227) 00:40:35.038 1221.353 - 1228.800: 4.1942% ( 240) 00:40:35.038 1228.800 - 1236.247: 4.5338% ( 248) 00:40:35.038 1236.247 - 1243.695: 4.8952% ( 264) 00:40:35.038 1243.695 - 1251.142: 5.2499% ( 259) 00:40:35.038 1251.142 - 1258.589: 5.6525% ( 294) 00:40:35.038 1258.589 - 1266.036: 6.0496% ( 290) 00:40:35.038 1266.036 - 1273.484: 6.4535% ( 295) 00:40:35.038 1273.484 - 1280.931: 6.9150% ( 337) 00:40:35.038 1280.931 - 1288.378: 7.3490% ( 317) 00:40:35.038 1288.378 - 1295.825: 7.8584% ( 372) 00:40:35.038 1295.825 - 1303.273: 8.3801% ( 381) 00:40:35.038 1303.273 - 1310.720: 8.9319% ( 403) 00:40:35.038 1310.720 - 1318.167: 9.5468% ( 449) 00:40:35.038 1318.167 - 1325.615: 10.2369% ( 504) 00:40:35.038 1325.615 - 1333.062: 11.0105% ( 565) 00:40:35.038 1333.062 - 1340.509: 11.7774% ( 560) 00:40:35.038 1340.509 - 1347.956: 12.5318% ( 551) 00:40:35.038 1347.956 - 1355.404: 13.3520% ( 599) 00:40:35.038 1355.404 - 1362.851: 14.1408% ( 576) 00:40:35.038 1362.851 - 1370.298: 15.0363% ( 654) 00:40:35.038 1370.298 - 1377.745: 15.9181% ( 644) 00:40:35.038 1377.745 - 1385.193: 16.7999% ( 644) 00:40:35.038 1385.193 - 1392.640: 17.7557% ( 698) 00:40:35.038 1392.640 - 1400.087: 18.7663% ( 738) 00:40:35.038 1400.087 - 1407.535: 19.7220% ( 698) 00:40:35.038 1407.535 - 1414.982: 20.6751% ( 696) 00:40:35.038 1414.982 - 1422.429: 21.5199% ( 617) 00:40:35.038 1422.429 - 1429.876: 22.4757% ( 698) 00:40:35.038 1429.876 - 1437.324: 23.4945% ( 744) 00:40:35.038 1437.324 - 1444.771: 24.4653% ( 709) 00:40:35.038 1444.771 - 1452.218: 25.3923% ( 677) 00:40:35.038 1452.218 - 1459.665: 26.3563% ( 704) 00:40:35.038 1459.665 - 1467.113: 27.2792% ( 674) 00:40:35.038 1467.113 - 1474.560: 28.2062% ( 677) 00:40:35.038 1474.560 - 1482.007: 29.1305% ( 675) 00:40:35.038 1482.007 - 1489.455: 30.0808% ( 694) 00:40:35.038 1489.455 - 1496.902: 30.9996% ( 671) 00:40:35.038 1496.902 - 1504.349: 31.9827% ( 718) 00:40:35.038 1504.349 - 1511.796: 32.9289% ( 691) 00:40:35.038 1511.796 - 1519.244: 33.8491% ( 672) 00:40:35.038 1519.244 - 1526.691: 
34.6912% ( 615) 00:40:35.038 1526.691 - 1534.138: 35.6073% ( 669) 00:40:35.038 1534.138 - 1541.585: 36.5494% ( 688) 00:40:35.038 1541.585 - 1549.033: 37.5448% ( 727) 00:40:35.038 1549.033 - 1556.480: 38.5116% ( 706) 00:40:35.038 1556.480 - 1563.927: 39.5591% ( 765) 00:40:35.038 1563.927 - 1571.375: 40.5149% ( 698) 00:40:35.038 1571.375 - 1578.822: 41.4706% ( 698) 00:40:35.038 1578.822 - 1586.269: 42.4374% ( 706) 00:40:35.038 1586.269 - 1593.716: 43.3822% ( 690) 00:40:35.038 1593.716 - 1601.164: 44.3215% ( 686) 00:40:35.038 1601.164 - 1608.611: 45.2622% ( 687) 00:40:35.038 1608.611 - 1616.058: 46.1797% ( 670) 00:40:35.038 1616.058 - 1623.505: 47.2806% ( 804) 00:40:35.038 1623.505 - 1630.953: 48.2199% ( 686) 00:40:35.038 1630.953 - 1638.400: 49.2702% ( 767) 00:40:35.038 1638.400 - 1645.847: 50.1890% ( 671) 00:40:35.038 1645.847 - 1653.295: 51.0667% ( 641) 00:40:35.038 1653.295 - 1660.742: 52.0841% ( 743) 00:40:35.038 1660.742 - 1668.189: 52.9344% ( 621) 00:40:35.038 1668.189 - 1675.636: 53.8464% ( 666) 00:40:35.038 1675.636 - 1683.084: 54.7008% ( 624) 00:40:35.038 1683.084 - 1690.531: 55.5238% ( 601) 00:40:35.038 1690.531 - 1697.978: 56.3399% ( 596) 00:40:35.038 1697.978 - 1705.425: 57.2176% ( 641) 00:40:35.038 1705.425 - 1712.873: 58.0200% ( 586) 00:40:35.038 1712.873 - 1720.320: 58.8238% ( 587) 00:40:35.038 1720.320 - 1727.767: 59.6686% ( 617) 00:40:35.038 1727.767 - 1735.215: 60.4231% ( 551) 00:40:35.038 1735.215 - 1742.662: 61.1927% ( 562) 00:40:35.038 1742.662 - 1750.109: 61.9910% ( 583) 00:40:35.038 1750.109 - 1757.556: 62.7537% ( 557) 00:40:35.038 1757.556 - 1765.004: 63.4616% ( 517) 00:40:35.038 1765.004 - 1772.451: 64.2394% ( 568) 00:40:35.038 1772.451 - 1779.898: 64.9048% ( 486) 00:40:35.038 1779.898 - 1787.345: 65.5539% ( 474) 00:40:35.038 1787.345 - 1794.793: 66.1810% ( 458) 00:40:35.038 1794.793 - 1802.240: 66.8397% ( 481) 00:40:35.038 1802.240 - 1809.687: 67.4641% ( 456) 00:40:35.038 1809.687 - 1817.135: 68.0734% ( 445) 00:40:35.038 1817.135 - 1824.582: 68.7115% ( 466) 00:40:35.038 1824.582 - 1832.029: 69.3140% ( 440) 00:40:35.038 1832.029 - 1839.476: 69.8973% ( 426) 00:40:35.038 1839.476 - 1846.924: 70.4984% ( 439) 00:40:35.038 1846.924 - 1854.371: 71.0571% ( 408) 00:40:35.038 1854.371 - 1861.818: 71.6486% ( 432) 00:40:35.038 1861.818 - 1869.265: 72.2073% ( 408) 00:40:35.038 1869.265 - 1876.713: 72.7660% ( 408) 00:40:35.038 1876.713 - 1884.160: 73.3562% ( 431) 00:40:35.038 1884.160 - 1891.607: 73.8368% ( 351) 00:40:35.038 1891.607 - 1899.055: 74.4256% ( 430) 00:40:35.038 1899.055 - 1906.502: 74.9596% ( 390) 00:40:35.038 1906.502 - 1921.396: 76.0742% ( 814) 00:40:35.038 1921.396 - 1936.291: 77.0560% ( 717) 00:40:35.038 1936.291 - 1951.185: 77.9817% ( 676) 00:40:35.038 1951.185 - 1966.080: 78.8854% ( 660) 00:40:35.038 1966.080 - 1980.975: 79.7768% ( 651) 00:40:35.038 1980.975 - 1995.869: 80.6093% ( 608) 00:40:35.038 1995.869 - 2010.764: 81.3912% ( 571) 00:40:35.038 2010.764 - 2025.658: 82.1375% ( 545) 00:40:35.038 2025.658 - 2040.553: 82.8837% ( 545) 00:40:35.038 2040.553 - 2055.447: 83.5807% ( 509) 00:40:35.038 2055.447 - 2070.342: 84.2435% ( 484) 00:40:35.038 2070.342 - 2085.236: 84.9377% ( 507) 00:40:35.038 2085.236 - 2100.131: 85.5731% ( 464) 00:40:35.038 2100.131 - 2115.025: 86.1783% ( 442) 00:40:35.038 2115.025 - 2129.920: 86.7233% ( 398) 00:40:35.038 2129.920 - 2144.815: 87.3107% ( 429) 00:40:35.038 2144.815 - 2159.709: 87.7995% ( 357) 00:40:35.038 2159.709 - 2174.604: 88.2843% ( 354) 00:40:35.038 2174.604 - 2189.498: 88.7649% ( 351) 00:40:35.038 2189.498 - 2204.393: 
89.2209% ( 333) 00:40:35.038 2204.393 - 2219.287: 89.6864% ( 340) 00:40:35.038 2219.287 - 2234.182: 90.1246% ( 320) 00:40:35.038 2234.182 - 2249.076: 90.5107% ( 282) 00:40:35.038 2249.076 - 2263.971: 90.8983% ( 283) 00:40:35.038 2263.971 - 2278.865: 91.2776% ( 277) 00:40:35.038 2278.865 - 2293.760: 91.6035% ( 238) 00:40:35.038 2293.760 - 2308.655: 91.9280% ( 237) 00:40:35.039 2308.655 - 2323.549: 92.1922% ( 193) 00:40:35.039 2323.549 - 2338.444: 92.4962% ( 222) 00:40:35.039 2338.444 - 2353.338: 92.7523% ( 187) 00:40:35.039 2353.338 - 2368.233: 93.0125% ( 190) 00:40:35.039 2368.233 - 2383.127: 93.2767% ( 193) 00:40:35.039 2383.127 - 2398.022: 93.5191% ( 177) 00:40:35.039 2398.022 - 2412.916: 93.7601% ( 176) 00:40:35.039 2412.916 - 2427.811: 93.9628% ( 148) 00:40:35.039 2427.811 - 2442.705: 94.1640% ( 147) 00:40:35.039 2442.705 - 2457.600: 94.3530% ( 138) 00:40:35.039 2457.600 - 2472.495: 94.5379% ( 135) 00:40:35.039 2472.495 - 2487.389: 94.7077% ( 124) 00:40:35.039 2487.389 - 2502.284: 94.8692% ( 118) 00:40:35.039 2502.284 - 2517.178: 95.0445% ( 128) 00:40:35.039 2517.178 - 2532.073: 95.1979% ( 112) 00:40:35.039 2532.073 - 2546.967: 95.3457% ( 108) 00:40:35.039 2546.967 - 2561.862: 95.5060% ( 117) 00:40:35.039 2561.862 - 2576.756: 95.6484% ( 104) 00:40:35.039 2576.756 - 2591.651: 95.8031% ( 113) 00:40:35.039 2591.651 - 2606.545: 95.9537% ( 110) 00:40:35.039 2606.545 - 2621.440: 96.0783% ( 91) 00:40:35.039 2621.440 - 2636.335: 96.2016% ( 90) 00:40:35.039 2636.335 - 2651.229: 96.3193% ( 86) 00:40:35.039 2651.229 - 2666.124: 96.4467% ( 93) 00:40:35.039 2666.124 - 2681.018: 96.5644% ( 86) 00:40:35.039 2681.018 - 2695.913: 96.6808% ( 85) 00:40:35.039 2695.913 - 2710.807: 96.7972% ( 85) 00:40:35.039 2710.807 - 2725.702: 96.9068% ( 80) 00:40:35.039 2725.702 - 2740.596: 97.0040% ( 71) 00:40:35.039 2740.596 - 2755.491: 97.1108% ( 78) 00:40:35.039 2755.491 - 2770.385: 97.2272% ( 85) 00:40:35.039 2770.385 - 2785.280: 97.3121% ( 62) 00:40:35.039 2785.280 - 2800.175: 97.3860% ( 54) 00:40:35.039 2800.175 - 2815.069: 97.4654% ( 58) 00:40:35.039 2815.069 - 2829.964: 97.5435% ( 57) 00:40:35.039 2829.964 - 2844.858: 97.6092% ( 48) 00:40:35.039 2844.858 - 2859.753: 97.6790% ( 51) 00:40:35.039 2859.753 - 2874.647: 97.7393% ( 44) 00:40:35.039 2874.647 - 2889.542: 97.7982% ( 43) 00:40:35.039 2889.542 - 2904.436: 97.8570% ( 43) 00:40:35.039 2904.436 - 2919.331: 97.9091% ( 38) 00:40:35.039 2919.331 - 2934.225: 97.9570% ( 35) 00:40:35.039 2934.225 - 2949.120: 98.0090% ( 38) 00:40:35.039 2949.120 - 2964.015: 98.0515% ( 31) 00:40:35.039 2964.015 - 2978.909: 98.0871% ( 26) 00:40:35.039 2978.909 - 2993.804: 98.1323% ( 33) 00:40:35.039 2993.804 - 3008.698: 98.1734% ( 30) 00:40:35.039 3008.698 - 3023.593: 98.2268% ( 39) 00:40:35.039 3023.593 - 3038.487: 98.2678% ( 30) 00:40:35.039 3038.487 - 3053.382: 98.3117% ( 32) 00:40:35.039 3053.382 - 3068.276: 98.3527% ( 30) 00:40:35.039 3068.276 - 3083.171: 98.3883% ( 26) 00:40:35.039 3083.171 - 3098.065: 98.4322% ( 32) 00:40:35.039 3098.065 - 3112.960: 98.4678% ( 26) 00:40:35.039 3112.960 - 3127.855: 98.5047% ( 27) 00:40:35.039 3127.855 - 3142.749: 98.5431% ( 28) 00:40:35.039 3142.749 - 3157.644: 98.5814% ( 28) 00:40:35.039 3157.644 - 3172.538: 98.6088% ( 20) 00:40:35.039 3172.538 - 3187.433: 98.6471% ( 28) 00:40:35.039 3187.433 - 3202.327: 98.6759% ( 21) 00:40:35.039 3202.327 - 3217.222: 98.7129% ( 27) 00:40:35.039 3217.222 - 3232.116: 98.7416% ( 21) 00:40:35.039 3232.116 - 3247.011: 98.7731% ( 23) 00:40:35.039 3247.011 - 3261.905: 98.8032% ( 22) 00:40:35.039 3261.905 - 
3276.800: 98.8361% ( 24) 00:40:35.039 3276.800 - 3291.695: 98.8607% ( 18) 00:40:35.039 3291.695 - 3306.589: 98.8895% ( 21) 00:40:35.039 3306.589 - 3321.484: 98.9183% ( 21) 00:40:35.039 3321.484 - 3336.378: 98.9484% ( 22) 00:40:35.039 3336.378 - 3351.273: 98.9785% ( 22) 00:40:35.039 3351.273 - 3366.167: 99.0031% ( 18) 00:40:35.039 3366.167 - 3381.062: 99.0292% ( 19) 00:40:35.039 3381.062 - 3395.956: 99.0579% ( 21) 00:40:35.039 3395.956 - 3410.851: 99.0963% ( 28) 00:40:35.039 3410.851 - 3425.745: 99.1319% ( 26) 00:40:35.039 3425.745 - 3440.640: 99.1606% ( 21) 00:40:35.039 3440.640 - 3455.535: 99.1825% ( 16) 00:40:35.039 3455.535 - 3470.429: 99.2044% ( 16) 00:40:35.039 3470.429 - 3485.324: 99.2263% ( 16) 00:40:35.039 3485.324 - 3500.218: 99.2483% ( 16) 00:40:35.039 3500.218 - 3515.113: 99.2674% ( 14) 00:40:35.039 3515.113 - 3530.007: 99.2866% ( 14) 00:40:35.039 3530.007 - 3544.902: 99.3044% ( 13) 00:40:35.039 3544.902 - 3559.796: 99.3167% ( 9) 00:40:35.039 3559.796 - 3574.691: 99.3290% ( 9) 00:40:35.039 3574.691 - 3589.585: 99.3455% ( 12) 00:40:35.039 3589.585 - 3604.480: 99.3605% ( 11) 00:40:35.039 3604.480 - 3619.375: 99.3770% ( 12) 00:40:35.039 3619.375 - 3634.269: 99.3934% ( 12) 00:40:35.039 3634.269 - 3649.164: 99.4098% ( 12) 00:40:35.039 3649.164 - 3664.058: 99.4249% ( 11) 00:40:35.039 3664.058 - 3678.953: 99.4427% ( 13) 00:40:35.039 3678.953 - 3693.847: 99.4660% ( 17) 00:40:35.039 3693.847 - 3708.742: 99.4742% ( 6) 00:40:35.039 3708.742 - 3723.636: 99.4906% ( 12) 00:40:35.039 3723.636 - 3738.531: 99.5016% ( 8) 00:40:35.039 3738.531 - 3753.425: 99.5153% ( 10) 00:40:35.039 3753.425 - 3768.320: 99.5262% ( 8) 00:40:35.039 3768.320 - 3783.215: 99.5344% ( 6) 00:40:35.039 3783.215 - 3798.109: 99.5440% ( 7) 00:40:35.039 3798.109 - 3813.004: 99.5509% ( 5) 00:40:35.039 3813.004 - 3842.793: 99.5673% ( 12) 00:40:35.039 3842.793 - 3872.582: 99.5837% ( 12) 00:40:35.039 3872.582 - 3902.371: 99.6015% ( 13) 00:40:35.039 3902.371 - 3932.160: 99.6166% ( 11) 00:40:35.039 3932.160 - 3961.949: 99.6330% ( 12) 00:40:35.039 3961.949 - 3991.738: 99.6454% ( 9) 00:40:35.039 3991.738 - 4021.527: 99.6590% ( 10) 00:40:35.039 4021.527 - 4051.316: 99.6714% ( 9) 00:40:35.039 4051.316 - 4081.105: 99.6796% ( 6) 00:40:35.039 4081.105 - 4110.895: 99.6905% ( 8) 00:40:35.039 4110.895 - 4140.684: 99.7001% ( 7) 00:40:35.039 4140.684 - 4170.473: 99.7070% ( 5) 00:40:35.039 4170.473 - 4200.262: 99.7138% ( 5) 00:40:35.039 4200.262 - 4230.051: 99.7220% ( 6) 00:40:35.039 4230.051 - 4259.840: 99.7302% ( 6) 00:40:35.039 4259.840 - 4289.629: 99.7357% ( 4) 00:40:35.039 4289.629 - 4319.418: 99.7398% ( 3) 00:40:35.039 4319.418 - 4349.207: 99.7412% ( 1) 00:40:35.039 4349.207 - 4378.996: 99.7439% ( 2) 00:40:35.039 4438.575 - 4468.364: 99.7453% ( 1) 00:40:35.039 4527.942 - 4557.731: 99.7480% ( 2) 00:40:35.039 4557.731 - 4587.520: 99.7494% ( 1) 00:40:35.039 4647.098 - 4676.887: 99.7508% ( 1) 00:40:35.039 4885.411 - 4915.200: 99.7522% ( 1) 00:40:35.039 5272.669 - 5302.458: 99.7535% ( 1) 00:40:35.039 5302.458 - 5332.247: 99.7549% ( 1) 00:40:35.039 5600.349 - 5630.138: 99.7563% ( 1) 00:40:35.039 5659.927 - 5689.716: 99.7576% ( 1) 00:40:35.039 6881.280 - 6911.069: 99.7617% ( 3) 00:40:35.039 6911.069 - 6940.858: 99.7645% ( 2) 00:40:35.039 6940.858 - 6970.647: 99.7672% ( 2) 00:40:35.039 7298.327 - 7328.116: 99.7686% ( 1) 00:40:35.039 7417.484 - 7447.273: 99.7700% ( 1) 00:40:35.039 7745.164 - 7804.742: 99.7727% ( 2) 00:40:35.039 7804.742 - 7864.320: 99.7768% ( 3) 00:40:35.039 7864.320 - 7923.898: 99.7795% ( 2) 00:40:35.039 7923.898 - 7983.476: 
99.7823% ( 2) 00:40:35.039 8996.305 - 9055.884: 99.7837% ( 1) 00:40:35.039 9055.884 - 9115.462: 99.7850% ( 1) 00:40:35.039 9115.462 - 9175.040: 99.7878% ( 2) 00:40:35.039 9175.040 - 9234.618: 99.7919% ( 3) 00:40:35.039 9234.618 - 9294.196: 99.7960% ( 3) 00:40:35.039 9294.196 - 9353.775: 99.7987% ( 2) 00:40:35.039 9353.775 - 9413.353: 99.8015% ( 2) 00:40:35.039 9413.353 - 9472.931: 99.8042% ( 2) 00:40:35.039 9532.509 - 9592.087: 99.8097% ( 4) 00:40:35.039 9592.087 - 9651.665: 99.8193% ( 7) 00:40:35.039 9651.665 - 9711.244: 99.8220% ( 2) 00:40:35.039 9889.978 - 9949.556: 99.8234% ( 1) 00:40:35.039 9949.556 - 10009.135: 99.8247% ( 1) 00:40:35.039 13226.356 - 13285.935: 99.8329% ( 6) 00:40:35.039 13285.935 - 13345.513: 99.8877% ( 40) 00:40:35.039 13345.513 - 13405.091: 99.9343% ( 34) 00:40:35.039 13405.091 - 13464.669: 99.9356% ( 1) 00:40:35.039 13583.825 - 13643.404: 99.9466% ( 8) 00:40:35.039 13643.404 - 13702.982: 99.9726% ( 19) 00:40:35.039 13702.982 - 13762.560: 99.9822% ( 7) 00:40:35.039 13762.560 - 13822.138: 99.9863% ( 3) 00:40:35.040 13822.138 - 13881.716: 99.9918% ( 4) 00:40:35.040 13881.716 - 13941.295: 99.9973% ( 4) 00:40:35.040 13941.295 - 14000.873: 100.0000% ( 2) 00:40:35.040 00:40:35.040 23:26:24 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:40:35.040 00:40:35.040 real 0m2.544s 00:40:35.040 user 0m2.153s 00:40:35.040 sys 0m0.210s 00:40:35.040 23:26:24 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:35.040 23:26:24 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:40:35.040 ************************************ 00:40:35.040 END TEST nvme_perf 00:40:35.040 ************************************ 00:40:35.297 23:26:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:35.297 23:26:24 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:40:35.297 23:26:24 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:40:35.298 23:26:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:35.298 23:26:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:35.298 ************************************ 00:40:35.298 START TEST nvme_hello_world 00:40:35.298 ************************************ 00:40:35.298 23:26:24 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:40:35.555 Initializing NVMe Controllers 00:40:35.555 Attached to 0000:00:10.0 00:40:35.555 Namespace ID: 1 size: 5GB 00:40:35.555 Initialization complete. 00:40:35.555 INFO: using host memory buffer for IO 00:40:35.555 Hello world! 
00:40:35.555 00:40:35.555 real 0m0.280s 00:40:35.555 user 0m0.101s 00:40:35.555 sys 0m0.101s 00:40:35.555 23:26:24 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:35.555 23:26:24 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:35.555 ************************************ 00:40:35.555 END TEST nvme_hello_world 00:40:35.555 ************************************ 00:40:35.555 23:26:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:35.555 23:26:24 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:40:35.555 23:26:24 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:35.555 23:26:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:35.555 23:26:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:35.555 ************************************ 00:40:35.555 START TEST nvme_sgl 00:40:35.555 ************************************ 00:40:35.555 23:26:24 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:40:35.813 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:40:35.813 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:40:35.813 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:40:35.813 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:40:35.813 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:40:35.813 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:40:35.813 NVMe Readv/Writev Request test 00:40:35.813 Attached to 0000:00:10.0 00:40:35.813 0000:00:10.0: build_io_request_2 test passed 00:40:35.813 0000:00:10.0: build_io_request_4 test passed 00:40:35.813 0000:00:10.0: build_io_request_5 test passed 00:40:35.813 0000:00:10.0: build_io_request_6 test passed 00:40:35.813 0000:00:10.0: build_io_request_7 test passed 00:40:35.813 0000:00:10.0: build_io_request_10 test passed 00:40:35.813 Cleaning up... 00:40:35.813 00:40:35.813 real 0m0.306s 00:40:35.813 user 0m0.136s 00:40:35.813 sys 0m0.103s 00:40:35.813 23:26:25 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:35.813 23:26:25 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:40:35.813 ************************************ 00:40:35.813 END TEST nvme_sgl 00:40:35.813 ************************************ 00:40:35.813 23:26:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:35.813 23:26:25 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:40:35.813 23:26:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:35.813 23:26:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:35.813 23:26:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:35.813 ************************************ 00:40:35.813 START TEST nvme_e2edp 00:40:35.813 ************************************ 00:40:35.813 23:26:25 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:40:36.071 NVMe Write/Read with End-to-End data protection test 00:40:36.071 Attached to 0000:00:10.0 00:40:36.071 Cleaning up... 
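By this point the log has settled into a fixed cadence: each run_test call prints a START banner, executes the test binary under timing (the real/user/sys lines), and prints an END banner. A minimal sketch of that wrapper, inferred from the banners in this log rather than copied from SPDK — the real implementation lives in test/common/autotest_common.sh and additionally manages the xtrace_disable bookkeeping visible throughout:

  # Minimal sketch only; behavior inferred from this log, not SPDK's source.
  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                 # produces the real/user/sys lines after each test
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }
  # e.g. the call visible above:
  # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
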
00:40:36.071 00:40:36.071 real 0m0.291s 00:40:36.071 user 0m0.097s 00:40:36.071 sys 0m0.110s 00:40:36.071 23:26:25 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:36.071 23:26:25 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:40:36.071 ************************************ 00:40:36.071 END TEST nvme_e2edp 00:40:36.071 ************************************ 00:40:36.329 23:26:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:36.329 23:26:25 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:40:36.329 23:26:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:36.329 23:26:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:36.329 23:26:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:36.329 ************************************ 00:40:36.329 START TEST nvme_reserve 00:40:36.329 ************************************ 00:40:36.329 23:26:25 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:40:36.588 ===================================================== 00:40:36.588 NVMe Controller at PCI bus 0, device 16, function 0 00:40:36.588 ===================================================== 00:40:36.588 Reservations: Not Supported 00:40:36.588 Reservation test passed 00:40:36.588 00:40:36.588 real 0m0.270s 00:40:36.588 user 0m0.075s 00:40:36.588 sys 0m0.125s 00:40:36.588 23:26:25 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:36.588 ************************************ 00:40:36.588 23:26:25 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:40:36.588 END TEST nvme_reserve 00:40:36.588 ************************************ 00:40:36.588 23:26:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:36.588 23:26:25 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:40:36.588 23:26:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:36.588 23:26:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:36.588 23:26:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:36.588 ************************************ 00:40:36.588 START TEST nvme_err_injection 00:40:36.588 ************************************ 00:40:36.588 23:26:25 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:40:36.847 NVMe Error Injection test 00:40:36.847 Attached to 0000:00:10.0 00:40:36.847 0000:00:10.0: get features failed as expected 00:40:36.847 0000:00:10.0: get features successfully as expected 00:40:36.847 0000:00:10.0: read failed as expected 00:40:36.847 0000:00:10.0: read successfully as expected 00:40:36.847 Cleaning up... 
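The reserve test above reported "Reservations: Not Supported": the emulated 1b36:0010 controller simply does not advertise the reservation capability, so the test passes by confirming its absence. A hedged kernel-driver equivalent of that check with nvme-cli, assuming the device has been handed back to the kernel nvme driver (device names are illustrative):

  # Hedged: assumes the device is bound to the kernel nvme driver, not SPDK.
  sudo nvme id-ns /dev/nvme0n1 | grep -i rescap   # rescap 0 => no reservation support
  sudo nvme resv-report /dev/nvme0n1              # expected to fail on this controller
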
00:40:36.847 00:40:36.847 real 0m0.268s 00:40:36.847 user 0m0.095s 00:40:36.847 sys 0m0.111s 00:40:36.847 23:26:26 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:36.847 ************************************ 00:40:36.847 23:26:26 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:40:36.847 END TEST nvme_err_injection 00:40:36.847 ************************************ 00:40:36.847 23:26:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:36.847 23:26:26 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:40:36.847 23:26:26 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:40:36.847 23:26:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:36.847 23:26:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:36.847 ************************************ 00:40:36.847 START TEST nvme_overhead 00:40:36.847 ************************************ 00:40:36.847 23:26:26 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:40:38.221 Initializing NVMe Controllers 00:40:38.221 Attached to 0000:00:10.0 00:40:38.221 Initialization complete. Launching workers. 00:40:38.221 submit (in ns) avg, min, max = 15932.5, 12103.6, 89644.1 00:40:38.221 complete (in ns) avg, min, max = 10868.5, 8250.9, 170605.5 00:40:38.221 00:40:38.221 Submit histogram 00:40:38.221 ================ 00:40:38.221 Range in us Cumulative Count 00:40:38.221 12.102 - 12.160: 0.0366% ( 3) 00:40:38.221 12.160 - 12.218: 0.0610% ( 2) 00:40:38.221 12.218 - 12.276: 0.0975% ( 3) 00:40:38.221 12.276 - 12.335: 0.1341% ( 3) 00:40:38.221 12.335 - 12.393: 0.3048% ( 14) 00:40:38.221 12.393 - 12.451: 0.6341% ( 27) 00:40:38.221 12.451 - 12.509: 1.4388% ( 66) 00:40:38.221 12.509 - 12.567: 2.4143% ( 80) 00:40:38.221 12.567 - 12.625: 3.1338% ( 59) 00:40:38.221 12.625 - 12.684: 3.8410% ( 58) 00:40:38.221 12.684 - 12.742: 4.7067% ( 71) 00:40:38.221 12.742 - 12.800: 6.3529% ( 135) 00:40:38.221 12.800 - 12.858: 8.7550% ( 197) 00:40:38.221 12.858 - 12.916: 11.2303% ( 203) 00:40:38.221 12.916 - 12.975: 13.0350% ( 148) 00:40:38.221 12.975 - 13.033: 14.2056% ( 96) 00:40:38.221 13.033 - 13.091: 14.9006% ( 57) 00:40:38.221 13.091 - 13.149: 15.8273% ( 76) 00:40:38.221 13.149 - 13.207: 17.0101% ( 97) 00:40:38.221 13.207 - 13.265: 18.6319% ( 133) 00:40:38.221 13.265 - 13.324: 19.4245% ( 65) 00:40:38.221 13.324 - 13.382: 20.0951% ( 55) 00:40:38.221 13.382 - 13.440: 20.4121% ( 26) 00:40:38.221 13.440 - 13.498: 20.8877% ( 39) 00:40:38.221 13.498 - 13.556: 21.4242% ( 44) 00:40:38.221 13.556 - 13.615: 22.2168% ( 65) 00:40:38.221 13.615 - 13.673: 23.2533% ( 85) 00:40:38.221 13.673 - 13.731: 24.5458% ( 106) 00:40:38.221 13.731 - 13.789: 26.7894% ( 184) 00:40:38.221 13.789 - 13.847: 28.3624% ( 129) 00:40:38.221 13.847 - 13.905: 29.8622% ( 123) 00:40:38.221 13.905 - 13.964: 30.6914% ( 68) 00:40:38.221 13.964 - 14.022: 31.4840% ( 65) 00:40:38.221 14.022 - 14.080: 32.9960% ( 124) 00:40:38.221 14.080 - 14.138: 35.4225% ( 199) 00:40:38.221 14.138 - 14.196: 38.3246% ( 238) 00:40:38.221 14.196 - 14.255: 40.8609% ( 208) 00:40:38.221 14.255 - 14.313: 42.5802% ( 141) 00:40:38.221 14.313 - 14.371: 43.7386% ( 95) 00:40:38.221 14.371 - 14.429: 45.0067% ( 104) 00:40:38.221 14.429 - 14.487: 47.3967% ( 196) 00:40:38.221 14.487 - 14.545: 51.4449% ( 332) 00:40:38.221 14.545 - 14.604: 56.6638% ( 428) 00:40:38.221 14.604 - 14.662: 60.8950% ( 
347) 00:40:38.221 14.662 - 14.720: 62.9679% ( 170) 00:40:38.221 14.720 - 14.778: 64.1141% ( 94) 00:40:38.221 14.778 - 14.836: 65.1750% ( 87) 00:40:38.221 14.836 - 14.895: 66.1871% ( 83) 00:40:38.221 14.895 - 15.011: 67.6259% ( 118) 00:40:38.221 15.011 - 15.127: 68.5526% ( 76) 00:40:38.221 15.127 - 15.244: 69.0038% ( 37) 00:40:38.221 15.244 - 15.360: 69.2720% ( 22) 00:40:38.221 15.360 - 15.476: 69.4671% ( 16) 00:40:38.221 15.476 - 15.593: 69.5281% ( 5) 00:40:38.221 15.593 - 15.709: 69.6257% ( 8) 00:40:38.221 15.709 - 15.825: 69.6988% ( 6) 00:40:38.221 15.825 - 15.942: 69.7598% ( 5) 00:40:38.221 15.942 - 16.058: 69.8817% ( 10) 00:40:38.221 16.058 - 16.175: 69.9183% ( 3) 00:40:38.221 16.175 - 16.291: 69.9427% ( 2) 00:40:38.221 16.291 - 16.407: 69.9671% ( 2) 00:40:38.221 16.407 - 16.524: 70.7719% ( 66) 00:40:38.221 16.524 - 16.640: 78.1124% ( 602) 00:40:38.221 16.640 - 16.756: 84.7214% ( 542) 00:40:38.221 16.756 - 16.873: 87.2577% ( 208) 00:40:38.221 16.873 - 16.989: 88.8672% ( 132) 00:40:38.221 16.989 - 17.105: 89.4403% ( 47) 00:40:38.221 17.105 - 17.222: 89.7452% ( 25) 00:40:38.221 17.222 - 17.338: 89.8915% ( 12) 00:40:38.221 17.338 - 17.455: 89.9403% ( 4) 00:40:38.221 17.455 - 17.571: 90.0378% ( 8) 00:40:38.221 17.571 - 17.687: 90.0988% ( 5) 00:40:38.221 17.687 - 17.804: 90.1719% ( 6) 00:40:38.221 17.804 - 17.920: 90.2695% ( 8) 00:40:38.221 17.920 - 18.036: 90.3426% ( 6) 00:40:38.221 18.036 - 18.153: 90.3670% ( 2) 00:40:38.221 18.153 - 18.269: 90.4158% ( 4) 00:40:38.221 18.269 - 18.385: 90.4402% ( 2) 00:40:38.221 18.385 - 18.502: 90.4646% ( 2) 00:40:38.221 18.502 - 18.618: 90.5255% ( 5) 00:40:38.222 18.618 - 18.735: 90.5499% ( 2) 00:40:38.222 18.735 - 18.851: 90.5865% ( 3) 00:40:38.222 18.851 - 18.967: 90.6109% ( 2) 00:40:38.222 18.967 - 19.084: 90.6719% ( 5) 00:40:38.222 19.084 - 19.200: 90.6841% ( 1) 00:40:38.222 19.200 - 19.316: 90.6963% ( 1) 00:40:38.222 19.316 - 19.433: 90.7206% ( 2) 00:40:38.222 19.433 - 19.549: 90.7450% ( 2) 00:40:38.222 19.549 - 19.665: 90.7572% ( 1) 00:40:38.222 19.782 - 19.898: 90.7816% ( 2) 00:40:38.222 19.898 - 20.015: 90.7938% ( 1) 00:40:38.222 20.015 - 20.131: 90.8060% ( 1) 00:40:38.222 20.131 - 20.247: 90.8182% ( 1) 00:40:38.222 20.247 - 20.364: 90.8548% ( 3) 00:40:38.222 20.364 - 20.480: 90.8792% ( 2) 00:40:38.222 20.713 - 20.829: 90.8914% ( 1) 00:40:38.222 20.945 - 21.062: 90.9035% ( 1) 00:40:38.222 21.062 - 21.178: 90.9279% ( 2) 00:40:38.222 21.295 - 21.411: 90.9401% ( 1) 00:40:38.222 21.411 - 21.527: 90.9645% ( 2) 00:40:38.222 21.527 - 21.644: 90.9767% ( 1) 00:40:38.222 21.644 - 21.760: 91.0011% ( 2) 00:40:38.222 21.760 - 21.876: 91.0133% ( 1) 00:40:38.222 21.876 - 21.993: 91.0255% ( 1) 00:40:38.222 21.993 - 22.109: 91.0499% ( 2) 00:40:38.222 22.109 - 22.225: 91.0743% ( 2) 00:40:38.222 22.342 - 22.458: 91.1108% ( 3) 00:40:38.222 22.458 - 22.575: 91.1230% ( 1) 00:40:38.222 22.575 - 22.691: 91.1352% ( 1) 00:40:38.222 22.807 - 22.924: 91.1474% ( 1) 00:40:38.222 22.924 - 23.040: 91.1596% ( 1) 00:40:38.222 23.040 - 23.156: 91.1840% ( 2) 00:40:38.222 23.156 - 23.273: 91.2206% ( 3) 00:40:38.222 23.389 - 23.505: 91.2328% ( 1) 00:40:38.222 23.622 - 23.738: 91.2572% ( 2) 00:40:38.222 23.738 - 23.855: 91.2937% ( 3) 00:40:38.222 23.971 - 24.087: 91.3547% ( 5) 00:40:38.222 24.204 - 24.320: 91.3913% ( 3) 00:40:38.222 24.553 - 24.669: 91.4157% ( 2) 00:40:38.222 24.669 - 24.785: 91.4645% ( 4) 00:40:38.222 24.785 - 24.902: 91.4766% ( 1) 00:40:38.222 24.902 - 25.018: 91.4888% ( 1) 00:40:38.222 25.135 - 25.251: 91.5010% ( 1) 00:40:38.222 25.367 - 25.484: 91.5132% ( 1) 
00:40:38.222 [upper tail of the latency histogram elided: cumulative count climbs from 91.5254% at 25.484 us to 100.0000% at 89.833 us, with a cluster of samples between 27 us and 30 us]
00:40:38.222 
00:40:38.222 Complete histogram
00:40:38.222 ==================
00:40:38.222        Range in us    Cumulative    Count
00:40:38.223 [per-bucket data elided: cumulative count rises from 0.0244% at 8.204 us to 100.0000% at 171.287 us; most samples fall between 8.4 us and 12 us, with a smaller burst near 23-24 us]
00:40:38.223 
00:40:38.223 real	0m1.271s
00:40:38.223 user	0m1.105s
00:40:38.223 sys	0m0.093s
00:40:38.223 23:26:27 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable
00:40:38.223 23:26:27 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:40:38.223 ************************************
00:40:38.223 END TEST nvme_overhead
00:40:38.223 ************************************
00:40:38.223 23:26:27 nvme -- common/autotest_common.sh@1142 -- # return 0
00:40:38.223 23:26:27 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:40:38.223 23:26:27 nvme --
common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:40:38.223 23:26:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:38.223 23:26:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:38.223 ************************************ 00:40:38.223 START TEST nvme_arbitration 00:40:38.223 ************************************ 00:40:38.223 23:26:27 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:40:41.501 Initializing NVMe Controllers 00:40:41.501 Attached to 0000:00:10.0 00:40:41.501 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:40:41.501 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:40:41.501 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:40:41.501 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:40:41.501 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:40:41.501 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:40:41.501 Initialization complete. Launching workers. 00:40:41.501 Starting thread on core 1 with urgent priority queue 00:40:41.501 Starting thread on core 2 with urgent priority queue 00:40:41.501 Starting thread on core 3 with urgent priority queue 00:40:41.501 Starting thread on core 0 with urgent priority queue 00:40:41.502 QEMU NVMe Ctrl (12340 ) core 0: 6641.67 IO/s 15.06 secs/100000 ios 00:40:41.502 QEMU NVMe Ctrl (12340 ) core 1: 6812.67 IO/s 14.68 secs/100000 ios 00:40:41.502 QEMU NVMe Ctrl (12340 ) core 2: 3655.33 IO/s 27.36 secs/100000 ios 00:40:41.502 QEMU NVMe Ctrl (12340 ) core 3: 4014.00 IO/s 24.91 secs/100000 ios 00:40:41.502 ======================================================== 00:40:41.502 00:40:41.502 00:40:41.502 real 0m3.331s 00:40:41.502 user 0m9.165s 00:40:41.502 sys 0m0.097s 00:40:41.502 23:26:30 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:41.502 23:26:30 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:40:41.502 ************************************ 00:40:41.502 END TEST nvme_arbitration 00:40:41.502 ************************************ 00:40:41.502 23:26:30 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:41.502 23:26:30 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:40:41.502 23:26:30 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:40:41.502 23:26:30 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:41.502 23:26:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:41.502 ************************************ 00:40:41.502 START TEST nvme_single_aen 00:40:41.502 ************************************ 00:40:41.502 23:26:30 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:40:41.760 Asynchronous Event Request test 00:40:41.760 Attached to 0000:00:10.0 00:40:41.760 Reset controller to setup AER completions for this process 00:40:41.760 Registering asynchronous event callbacks... 
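The aer run in progress here follows a fixed sequence, as the lines below show: register a callback, read each controller's temperature threshold, set it below the live temperature so the controller posts an Asynchronous Event, then restore it. A rough out-of-band sketch of the same idea with nvme-cli — assuming the drive is bound to the kernel nvme driver rather than SPDK's userspace driver, and using feature 0x04 (Temperature Threshold) with the 343 K / 323 K values reported below:

    DEV=/dev/nvme0                         # assumption: kernel-visible controller
    nvme get-feature "$DEV" -f 4           # read the current threshold (343 K here)
    nvme set-feature "$DEV" -f 4 -v 300    # drop below the live 323 K reading -> AER fires
    nvme set-feature "$DEV" -f 4 -v 343    # restore the original threshold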
00:40:41.760 Getting orig temperature thresholds of all controllers 00:40:41.760 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:41.760 Setting all controllers temperature threshold low to trigger AER 00:40:41.760 Waiting for all controllers temperature threshold to be set lower 00:40:41.760 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:41.760 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:40:41.760 Waiting for all controllers to trigger AER and reset threshold 00:40:41.760 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:41.760 Cleaning up... 00:40:41.760 00:40:41.760 real 0m0.234s 00:40:41.760 user 0m0.069s 00:40:41.760 sys 0m0.103s 00:40:41.760 23:26:31 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:41.760 23:26:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:40:41.760 ************************************ 00:40:41.760 END TEST nvme_single_aen 00:40:41.760 ************************************ 00:40:41.760 23:26:31 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:41.760 23:26:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:40:41.760 23:26:31 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:41.760 23:26:31 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:41.760 23:26:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:41.760 ************************************ 00:40:41.760 START TEST nvme_doorbell_aers 00:40:41.760 ************************************ 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:41.760 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:40:42.018 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:40:42.018 23:26:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:40:42.018 23:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:40:42.018 23:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:40:42.276 [2024-07-13 23:26:31.433711] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179321) is not found. Dropping the request. 00:40:52.245 Executing: test_write_invalid_db 00:40:52.245 Waiting for AER completion... 
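A few lines up, the doorbell test enumerated its targets with the get_nvme_bdfs helper. Stripped of the xtrace prefixes, the helper is a single pipeline: gen_nvme.sh emits a JSON bdev configuration and jq pulls out each controller's PCI address.

    rootdir=/home/vagrant/spdk_repo/spdk     # repo layout used in this run
    # Collect every NVMe PCI address (BDF) known to gen_nvme.sh:
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"               # prints 0000:00:10.0 on this VM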
00:40:52.245 Failure: test_write_invalid_db 00:40:52.245 00:40:52.245 Executing: test_invalid_db_write_overflow_sq 00:40:52.245 Waiting for AER completion... 00:40:52.245 Failure: test_invalid_db_write_overflow_sq 00:40:52.245 00:40:52.245 Executing: test_invalid_db_write_overflow_cq 00:40:52.245 Waiting for AER completion... 00:40:52.245 Failure: test_invalid_db_write_overflow_cq 00:40:52.245 00:40:52.245 00:40:52.245 real 0m10.102s 00:40:52.245 user 0m8.481s 00:40:52.245 sys 0m1.531s 00:40:52.245 23:26:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:52.245 ************************************ 00:40:52.245 23:26:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:40:52.245 END TEST nvme_doorbell_aers 00:40:52.245 ************************************ 00:40:52.245 23:26:41 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:52.245 23:26:41 nvme -- nvme/nvme.sh@97 -- # uname 00:40:52.245 23:26:41 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:40:52.245 23:26:41 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:40:52.245 23:26:41 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:40:52.245 23:26:41 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:52.245 23:26:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:52.245 ************************************ 00:40:52.245 START TEST nvme_multi_aen 00:40:52.245 ************************************ 00:40:52.245 23:26:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:40:52.245 [2024-07-13 23:26:41.505693] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179321) is not found. Dropping the request. 00:40:52.245 [2024-07-13 23:26:41.505864] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179321) is not found. Dropping the request. 00:40:52.245 [2024-07-13 23:26:41.505906] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179321) is not found. Dropping the request. 00:40:52.245 Child process pid: 179503 00:40:52.504 [Child] Asynchronous Event Request test 00:40:52.504 [Child] Attached to 0000:00:10.0 00:40:52.504 [Child] Registering asynchronous event callbacks... 00:40:52.504 [Child] Getting orig temperature thresholds of all controllers 00:40:52.504 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:52.504 [Child] Waiting for all controllers to trigger AER and reset threshold 00:40:52.504 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:52.504 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:52.504 [Child] Cleaning up... 00:40:52.504 Asynchronous Event Request test 00:40:52.504 Attached to 0000:00:10.0 00:40:52.504 Reset controller to setup AER completions for this process 00:40:52.504 Registering asynchronous event callbacks... 
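Every test here, including the nvme_multi_aen run printing around this point, is launched through the run_test wrapper from autotest_common.sh. The sketch below is a simplified reconstruction inferred from what the wrapper leaves in the trace — the argument-count check, the START/END banners, and the real/user/sys timing lines; the real helper also manages xtrace state and per-test timing bookkeeping.

    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' \
            "START TEST $name" '************************************'
        time "$@"                # run the test command and print real/user/sys
        local rc=$?
        printf '%s\n' '************************************' \
            "END TEST $name" '************************************'
        return $rc
    }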
00:40:52.504 Getting orig temperature thresholds of all controllers 00:40:52.504 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:52.504 Setting all controllers temperature threshold low to trigger AER 00:40:52.504 Waiting for all controllers temperature threshold to be set lower 00:40:52.504 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:52.504 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:40:52.504 Waiting for all controllers to trigger AER and reset threshold 00:40:52.504 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:52.504 Cleaning up... 00:40:52.504 00:40:52.504 real 0m0.609s 00:40:52.504 user 0m0.227s 00:40:52.504 sys 0m0.188s 00:40:52.504 23:26:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:52.504 ************************************ 00:40:52.504 23:26:41 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:40:52.504 END TEST nvme_multi_aen 00:40:52.504 ************************************ 00:40:52.762 23:26:41 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:52.762 23:26:41 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:40:52.762 23:26:41 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:40:52.762 23:26:41 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:52.762 23:26:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:52.762 ************************************ 00:40:52.762 START TEST nvme_startup 00:40:52.762 ************************************ 00:40:52.762 23:26:41 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:40:53.020 Initializing NVMe Controllers 00:40:53.020 Attached to 0000:00:10.0 00:40:53.020 Initialization complete. 00:40:53.020 Time used:194861.969 (us). 
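The nvme_startup test that just reported "Time used" is a single timed controller attach. Reproducing the invocation recorded above — reading -t as a budget in microseconds, an inference from the "(us)" in the output rather than anything stated in this log:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk    # path layout from this run
    # Needs hugepages configured and the device bound via scripts/setup.sh:
    sudo "$SPDK_DIR/test/nvme/startup/startup" -t 1000000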
00:40:53.020 00:40:53.020 real 0m0.279s 00:40:53.020 user 0m0.093s 00:40:53.020 sys 0m0.113s 00:40:53.020 23:26:42 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:53.020 23:26:42 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:40:53.020 ************************************ 00:40:53.020 END TEST nvme_startup 00:40:53.020 ************************************ 00:40:53.020 23:26:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:40:53.020 23:26:42 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:40:53.020 23:26:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:53.020 23:26:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:53.020 23:26:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:53.020 ************************************ 00:40:53.020 START TEST nvme_multi_secondary 00:40:53.020 ************************************ 00:40:53.020 23:26:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:40:53.020 23:26:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=179570 00:40:53.020 23:26:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:40:53.020 23:26:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=179571 00:40:53.020 23:26:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:40:53.020 23:26:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:40:56.300 Initializing NVMe Controllers 00:40:56.300 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:56.300 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:40:56.300 Initialization complete. Launching workers. 00:40:56.300 ======================================================== 00:40:56.300 Latency(us) 00:40:56.300 Device Information : IOPS MiB/s Average min max 00:40:56.300 PCIE (0000:00:10.0) NSID 1 from core 1: 32220.97 125.86 496.18 135.66 3100.55 00:40:56.300 ======================================================== 00:40:56.300 Total : 32220.97 125.86 496.18 135.66 3100.55 00:40:56.300 00:40:56.300 Initializing NVMe Controllers 00:40:56.300 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:56.300 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:40:56.300 Initialization complete. Launching workers. 00:40:56.300 ======================================================== 00:40:56.300 Latency(us) 00:40:56.300 Device Information : IOPS MiB/s Average min max 00:40:56.300 PCIE (0000:00:10.0) NSID 1 from core 2: 12762.74 49.85 1253.37 151.16 24847.36 00:40:56.300 ======================================================== 00:40:56.300 Total : 12762.74 49.85 1253.37 151.16 24847.36 00:40:56.300 00:40:56.558 23:26:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 179570 00:40:58.458 Initializing NVMe Controllers 00:40:58.458 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:58.458 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:58.458 Initialization complete. Launching workers. 
00:40:58.458 ======================================================== 00:40:58.458 Latency(us) 00:40:58.458 Device Information : IOPS MiB/s Average min max 00:40:58.458 PCIE (0000:00:10.0) NSID 1 from core 0: 40894.47 159.74 390.90 121.54 3085.06 00:40:58.458 ======================================================== 00:40:58.458 Total : 40894.47 159.74 390.90 121.54 3085.06 00:40:58.458 00:40:58.458 23:26:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 179571 00:40:58.458 23:26:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=179652 00:40:58.458 23:26:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=179653 00:40:58.458 23:26:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:40:58.458 23:26:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:40:58.458 23:26:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:41:02.640 Initializing NVMe Controllers 00:41:02.640 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:41:02.640 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:41:02.640 Initialization complete. Launching workers. 00:41:02.640 ======================================================== 00:41:02.640 Latency(us) 00:41:02.640 Device Information : IOPS MiB/s Average min max 00:41:02.640 PCIE (0000:00:10.0) NSID 1 from core 0: 29503.58 115.25 541.93 154.42 4484.25 00:41:02.640 ======================================================== 00:41:02.640 Total : 29503.58 115.25 541.93 154.42 4484.25 00:41:02.640 00:41:02.640 Initializing NVMe Controllers 00:41:02.640 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:41:02.640 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:41:02.640 Initialization complete. Launching workers. 00:41:02.640 ======================================================== 00:41:02.640 Latency(us) 00:41:02.640 Device Information : IOPS MiB/s Average min max 00:41:02.640 PCIE (0000:00:10.0) NSID 1 from core 1: 28855.29 112.72 554.09 147.38 4634.62 00:41:02.640 ======================================================== 00:41:02.640 Total : 28855.29 112.72 554.09 147.38 4634.62 00:41:02.640 00:41:04.075 Initializing NVMe Controllers 00:41:04.075 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:41:04.075 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:41:04.075 Initialization complete. Launching workers. 
00:41:04.075 ======================================================== 00:41:04.075 Latency(us) 00:41:04.075 Device Information : IOPS MiB/s Average min max 00:41:04.075 PCIE (0000:00:10.0) NSID 1 from core 2: 16810.10 65.66 951.08 138.08 28520.05 00:41:04.075 ======================================================== 00:41:04.075 Total : 16810.10 65.66 951.08 138.08 28520.05 00:41:04.075 00:41:04.075 23:26:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 179652 00:41:04.075 23:26:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 179653 00:41:04.075 00:41:04.075 real 0m10.857s 00:41:04.075 user 0m18.550s 00:41:04.075 sys 0m0.734s 00:41:04.075 23:26:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:04.075 23:26:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:41:04.075 ************************************ 00:41:04.075 END TEST nvme_multi_secondary 00:41:04.075 ************************************ 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1142 -- # return 0 00:41:04.075 23:26:53 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:41:04.075 23:26:53 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/178890 ]] 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1088 -- # kill 178890 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1089 -- # wait 178890 00:41:04.075 [2024-07-13 23:26:53.177551] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179502) is not found. Dropping the request. 00:41:04.075 [2024-07-13 23:26:53.177746] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179502) is not found. Dropping the request. 00:41:04.075 [2024-07-13 23:26:53.177802] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179502) is not found. Dropping the request. 00:41:04.075 [2024-07-13 23:26:53.177873] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179502) is not found. Dropping the request. 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:41:04.075 23:26:53 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:04.075 23:26:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:41:04.075 ************************************ 00:41:04.075 START TEST bdev_nvme_reset_stuck_adm_cmd 00:41:04.075 ************************************ 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:41:04.075 * Looking for test storage... 
00:41:04.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=179789 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 179789 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 179789 ']' 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:04.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:04.075 23:26:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:41:04.333 [2024-07-13 23:26:53.525019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:41:04.333 [2024-07-13 23:26:53.525268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179789 ] 00:41:04.333 [2024-07-13 23:26:53.712194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:04.590 [2024-07-13 23:26:53.809305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:04.591 [2024-07-13 23:26:53.809452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:04.591 [2024-07-13 23:26:53.809816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.591 [2024-07-13 23:26:53.809823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:41:05.157 nvme0n1 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_3dcVT.txt 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:41:05.157 true 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720913214 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=179819 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:05.157 23:26:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:41:05.157 23:26:54 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:41:07.688 [2024-07-13 23:26:56.544952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:41:07.688 [2024-07-13 23:26:56.545433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:41:07.688 [2024-07-13 23:26:56.545517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:41:07.688 [2024-07-13 23:26:56.545592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:07.688 [2024-07-13 23:26:56.547910] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:07.688 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 179819 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 179819 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 179819 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_3dcVT.txt 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd 
-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_3dcVT.txt 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 179789 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 179789 ']' 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 179789 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179789 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179789' 00:41:07.688 killing process with pid 179789 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 179789 00:41:07.688 23:26:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 179789 00:41:07.946 23:26:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:41:07.946 23:26:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:41:07.946 ************************************ 
00:41:07.946 END TEST bdev_nvme_reset_stuck_adm_cmd 00:41:07.946 ************************************ 00:41:07.946 00:41:07.946 real 0m3.808s 00:41:07.946 user 0m13.575s 00:41:07.946 sys 0m0.598s 00:41:07.946 23:26:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:07.946 23:26:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:41:07.946 23:26:57 nvme -- common/autotest_common.sh@1142 -- # return 0 00:41:07.946 23:26:57 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:41:07.946 23:26:57 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:41:07.946 23:26:57 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:07.946 23:26:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:07.946 23:26:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:41:07.946 ************************************ 00:41:07.946 START TEST nvme_fio 00:41:07.946 ************************************ 00:41:07.946 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:41:07.947 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:41:07.947 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:41:08.254 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:41:08.254 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:41:08.254 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:41:08.254 23:26:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
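The xtrace on both sides of this point is the fio_nvme helper locating the ASan runtime and assembling an LD_PRELOAD'd command line, as the next lines show. Flattened into one invocation, the command it ends up running is:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Preload ASan (this is a sanitizer build) plus the SPDK fio plugin,
    # then point fio at the controller by PCIe transport address:
    LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.6 $SPDK_DIR/build/fio/spdk_nvme" \
        /usr/src/fio/fio "$SPDK_DIR/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096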
00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:41:08.254 23:26:57 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:41:08.512 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:41:08.512 fio-3.35 00:41:08.512 Starting 1 thread 00:41:11.792 00:41:11.792 test: (groupid=0, jobs=1): err= 0: pid=179949: Sat Jul 13 23:27:00 2024 00:41:11.792 read: IOPS=17.1k, BW=66.8MiB/s (70.1MB/s)(134MiB/2001msec) 00:41:11.792 slat (nsec): min=4135, max=95846, avg=5746.33, stdev=2153.94 00:41:11.792 clat (usec): min=301, max=11413, avg=3722.10, stdev=545.11 00:41:11.792 lat (usec): min=308, max=11509, avg=3727.85, stdev=545.70 00:41:11.792 clat percentiles (usec): 00:41:11.792 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:41:11.792 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:41:11.792 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4293], 00:41:11.792 | 99.00th=[ 6390], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[10683], 00:41:11.792 | 99.99th=[11338] 00:41:11.792 bw ( KiB/s): min=67920, max=70984, per=100.00%, avg=69120.00, stdev=1636.37, samples=3 00:41:11.792 iops : min=16980, max=17746, avg=17280.00, stdev=409.09, samples=3 00:41:11.792 write: IOPS=17.1k, BW=67.0MiB/s (70.2MB/s)(134MiB/2001msec); 0 zone resets 00:41:11.792 slat (nsec): min=4321, max=92549, avg=5954.53, stdev=2317.74 00:41:11.792 clat (usec): min=328, max=11311, avg=3727.93, stdev=551.70 00:41:11.792 lat (usec): min=339, max=11359, avg=3733.89, stdev=552.31 00:41:11.792 clat percentiles (usec): 00:41:11.792 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:41:11.792 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:41:11.792 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4293], 00:41:11.792 | 99.00th=[ 6587], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10683], 00:41:11.792 | 99.99th=[11207] 00:41:11.792 bw ( KiB/s): min=68096, max=70640, per=100.00%, avg=69018.67, stdev=1408.58, samples=3 00:41:11.792 iops : min=17024, max=17660, avg=17254.67, stdev=352.14, samples=3 00:41:11.792 
lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:41:11.792 lat (msec) : 2=0.13%, 4=83.18%, 10=16.58%, 20=0.08% 00:41:11.792 cpu : usr=99.75%, sys=0.05%, ctx=22, majf=0, minf=39 00:41:11.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:41:11.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:11.792 issued rwts: total=34242,34300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:11.792 00:41:11.792 Run status group 0 (all jobs): 00:41:11.792 READ: bw=66.8MiB/s (70.1MB/s), 66.8MiB/s-66.8MiB/s (70.1MB/s-70.1MB/s), io=134MiB (140MB), run=2001-2001msec 00:41:11.792 WRITE: bw=67.0MiB/s (70.2MB/s), 67.0MiB/s-67.0MiB/s (70.2MB/s-70.2MB/s), io=134MiB (140MB), run=2001-2001msec 00:41:11.792 ----------------------------------------------------- 00:41:11.792 Suppressions used: 00:41:11.792 count bytes template 00:41:11.792 1 32 /usr/src/fio/parse.c 00:41:11.792 ----------------------------------------------------- 00:41:11.792 00:41:11.792 23:27:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:41:11.792 23:27:01 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:41:11.792 00:41:11.792 real 0m3.955s 00:41:11.792 user 0m3.311s 00:41:11.792 sys 0m0.318s 00:41:11.792 ************************************ 00:41:11.792 END TEST nvme_fio 00:41:11.792 ************************************ 00:41:11.792 23:27:01 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:11.792 23:27:01 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:41:11.792 23:27:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:41:11.792 ************************************ 00:41:11.792 END TEST nvme 00:41:11.792 ************************************ 00:41:11.792 00:41:11.792 real 0m43.835s 00:41:11.792 user 1m57.117s 00:41:11.792 sys 0m7.575s 00:41:11.792 23:27:01 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:11.792 23:27:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:41:12.051 23:27:01 -- common/autotest_common.sh@1142 -- # return 0 00:41:12.051 23:27:01 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:41:12.051 23:27:01 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:41:12.051 23:27:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:12.051 23:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:12.051 23:27:01 -- common/autotest_common.sh@10 -- # set +x 00:41:12.051 ************************************ 00:41:12.051 START TEST nvme_scc 00:41:12.051 ************************************ 00:41:12.051 23:27:01 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:41:12.051 * Looking for test storage... 
00:41:12.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:41:12.051 23:27:01 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:12.051 23:27:01 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:12.051 23:27:01 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:12.051 23:27:01 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:12.051 23:27:01 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:12.051 23:27:01 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:12.051 23:27:01 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:12.051 23:27:01 nvme_scc -- paths/export.sh@5 -- # export PATH 00:41:12.051 23:27:01 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:41:12.051 23:27:01 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:41:12.051 23:27:01 nvme_scc -- cuse/common.sh@11 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:12.051 23:27:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:41:12.051 23:27:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:41:12.051 23:27:01 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:41:12.051 23:27:01 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:12.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:41:12.310 Waiting for block devices as requested 00:41:12.310 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:12.571 23:27:01 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:41:12.571 23:27:01 nvme_scc -- scripts/common.sh@15 -- # local i 00:41:12.571 23:27:01 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:41:12.571 23:27:01 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:41:12.571 23:27:01 nvme_scc -- scripts/common.sh@24 -- # return 0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:41:12.571 23:27:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:41:12.571 23:27:01 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.571 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[avscc]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[hmmaxd]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.572 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.573 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[active_power_workload]="-"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:41:12.574 23:27:01 
nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[npda]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:41:12.574 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:41:12.575 23:27:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:41:12.575 23:27:01 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:41:12.575 23:27:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 ))
00:41:12.575 23:27:01 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]]
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0
00:41:12.576 23:27:01 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:41:12.576 23:27:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
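
The scan traced above is plain bash: nvme_get runs `nvme id-ctrl`, splits each "register : value" line on `:` with `read -r reg val`, and evals the pair into an associative array; ctrl_has_scc then tests bit 8 of the captured ONCS field, which is where an NVMe controller advertises the Copy command. A minimal standalone sketch of those two steps (assuming nvme-cli is installed; /dev/nvme0 is illustrative, and the real functions.sh does more careful trimming and handles namespaces too):

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` output into an associative array, as nvme_get does.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip the padding around names
        val=${val//[[:space:]]/}              # good enough for numeric fields
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    # ctrl_has_scc reduces to this one test: ONCS bit 8 = Copy command (SCC).
    if (( nvme0[oncs] & 1 << 8 )); then
        echo "nvme0 supports the Simple Copy command (oncs=${nvme0[oncs]})"
    fi

With the controller in this log, oncs is 0x15d, so bit 8 (0x100) is set and the test above reports SCC support, which is exactly why get_ctrls_with_feature echoes nvme0.
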
00:41:12.576 23:27:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:41:12.576 23:27:01 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:41:12.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:41:13.093 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:41:14.046 23:27:03 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:41:14.046 23:27:03 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:41:14.046 23:27:03 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:41:14.046 23:27:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:41:14.046 ************************************
00:41:14.046 START TEST nvme_simple_copy ************************************
00:41:14.046 23:27:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:41:14.304 Initializing NVMe Controllers
00:41:14.304 Attaching to 0000:00:10.0
00:41:14.304 Controller supports SCC. Attached to 0000:00:10.0
00:41:14.304 Namespace ID: 1 size: 5GB
00:41:14.304 Initialization complete.
00:41:14.304
00:41:14.304 Controller QEMU NVMe Ctrl (12340 )
00:41:14.304 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:41:14.304 Namespace Block Size:4096
00:41:14.304 Writing LBAs 0 to 63 with Random Data
00:41:14.304 Copied LBAs from 0 - 63 to the Destination LBA 256
00:41:14.304 LBAs matching Written Data: 64
00:41:14.304
00:41:14.304 real 0m0.266s
00:41:14.304 user 0m0.115s
00:41:14.304 sys 0m0.053s
00:41:14.304 23:27:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:41:14.304 23:27:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:41:14.304 ************************************
00:41:14.304 END TEST nvme_simple_copy ************************************
00:41:14.304 23:27:03 nvme_scc -- common/autotest_common.sh@1142 -- # return 0
00:41:14.304
00:41:14.304 real 0m2.462s
00:41:14.304 user 0m0.784s
00:41:14.304 sys 0m1.586s
00:41:14.304 23:27:03 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:41:14.304 ************************************
00:41:14.304 23:27:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:41:14.304 END TEST nvme_scc ************************************
00:41:14.561 23:27:03 -- common/autotest_common.sh@1142 -- # return 0
00:41:14.561
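
The nvme_simple_copy run above is self-checking: the tool writes LBAs 0-63 with random data, issues a single Simple Copy to destination LBA 256, reads the destination back, and counts matching blocks, so "LBAs matching Written Data: 64" means all 64 copied blocks verified. The same comparison can be reproduced by hand once setup.sh reset has handed the device back to the kernel nvme driver (a sketch only; the 4096-byte block size matches the "Namespace Block Size" printed above, and it assumes nothing has rewritten those LBAs since the test ran):

    #!/usr/bin/env bash
    # Compare source LBAs 0-63 against destination LBAs 256-319 on the raw
    # namespace; after a successful Simple Copy the two ranges are identical.
    bs=4096
    dd if=/dev/nvme0n1 bs=$bs skip=0   count=64 status=none > /tmp/src.bin
    dd if=/dev/nvme0n1 bs=$bs skip=256 count=64 status=none > /tmp/dst.bin
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"
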
00:41:14.561 23:27:03 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:41:14.561 23:27:03 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]]
00:41:14.561 23:27:03 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:41:14.561 23:27:03 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]]
00:41:14.561 23:27:03 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]]
00:41:14.561 23:27:03 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:41:14.561 23:27:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:41:14.561 23:27:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:41:14.561 23:27:03 -- common/autotest_common.sh@10 -- # set +x
00:41:14.561 ************************************
00:41:14.561 START TEST nvme_rpc ************************************
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:41:14.561 * Looking for test storage...
00:41:14.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=()
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs))
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=()
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0
00:41:14.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=180420
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:41:14.561 23:27:03 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 180420
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 180420 ']'
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:41:14.561 23:27:03 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:41:14.561 [2024-07-13 23:27:03.925053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
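
For reference, get_first_nvme_bdf (traced above) does no sysfs walking of its own: it asks gen_nvme.sh for an SPDK bdev configuration and pulls every PCIe address out of the JSON with jq, and the first element of that array becomes $bdf. Condensed straight from the trace:

    # How the test picks its target device: first traddr in gen_nvme.sh output.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"    # prints 0000:00:10.0 on this runner
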
00:41:14.561 [2024-07-13 23:27:03.925282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180420 ] 00:41:14.818 [2024-07-13 23:27:04.085549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:14.818 [2024-07-13 23:27:04.166304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.818 [2024-07-13 23:27:04.166316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.751 23:27:04 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:15.751 23:27:04 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:41:15.751 23:27:04 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:41:16.009 Nvme0n1 00:41:16.009 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:41:16.009 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:41:16.267 request: 00:41:16.267 { 00:41:16.267 "bdev_name": "Nvme0n1", 00:41:16.267 "filename": "non_existing_file", 00:41:16.267 "method": "bdev_nvme_apply_firmware", 00:41:16.267 "req_id": 1 00:41:16.267 } 00:41:16.267 Got JSON-RPC error response 00:41:16.267 response: 00:41:16.267 { 00:41:16.267 "code": -32603, 00:41:16.267 "message": "open file failed." 00:41:16.267 } 00:41:16.267 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:41:16.267 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:41:16.267 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:41:16.267 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:41:16.267 23:27:05 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 180420 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 180420 ']' 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 180420 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180420 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180420' 00:41:16.267 killing process with pid 180420 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@967 -- # kill 180420 00:41:16.267 23:27:05 nvme_rpc -- common/autotest_common.sh@972 -- # wait 180420 00:41:16.834 00:41:16.834 real 0m2.398s 00:41:16.834 user 0m4.803s 00:41:16.834 sys 0m0.567s 00:41:16.834 23:27:06 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:16.834 23:27:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:16.834 ************************************ 00:41:16.834 END TEST nvme_rpc 00:41:16.834 ************************************ 00:41:16.834 23:27:06 -- common/autotest_common.sh@1142 -- # return 0 00:41:16.834 23:27:06 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:41:16.834 23:27:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:16.834 23:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:16.834 23:27:06 -- common/autotest_common.sh@10 -- # set +x 00:41:16.834 ************************************ 00:41:16.834 START TEST nvme_rpc_timeouts 00:41:16.834 ************************************ 00:41:16.834 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:41:17.092 * Looking for test storage... 00:41:17.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_180486 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_180486 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=180512 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 180512 00:41:17.093 23:27:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:41:17.093 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 180512 ']' 00:41:17.093 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:17.093 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:17.093 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:17.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:17.093 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:17.093 23:27:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:41:17.093 [2024-07-13 23:27:06.325764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
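nvme_rpc_timeouts.sh stages two scratch files keyed to the test and installs a cleanup trap before the target is even listening, so an aborted run leaves neither a stray spdk_tgt nor the temp files behind. A hedged sketch of that launch-and-wait pattern (waitforlisten is approximated here by polling the RPC socket; the tmp_* names are illustrative):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_default=/tmp/settings_default_$$
    tmp_modified=/tmp/settings_modified_$$
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 $spdk_tgt_pid; rm -f "$tmp_default" "$tmp_modified"; exit 1' SIGINT SIGTERM EXIT
    until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do   # wait for /var/tmp/spdk.sock
        sleep 0.1
    done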
00:41:17.093 [2024-07-13 23:27:06.326047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180512 ] 00:41:17.093 [2024-07-13 23:27:06.472888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:17.352 [2024-07-13 23:27:06.548214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:17.352 [2024-07-13 23:27:06.548217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.919 23:27:07 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:17.919 23:27:07 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:41:17.919 23:27:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:41:17.919 Checking default timeout settings: 00:41:17.919 23:27:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:41:18.177 Making settings changes with rpc: 00:41:18.177 23:27:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:41:18.177 23:27:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:41:18.743 Check default vs. modified settings: 00:41:18.743 23:27:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:41:18.743 23:27:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:41:19.001 Setting action_on_timeout is changed as expected. 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
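The check itself is a straight snapshot/modify/snapshot sequence over JSON-RPC, exactly as traced above: save_config captures the defaults, bdev_nvme_set_options rewrites the three timeout knobs, and a second save_config captures the result for comparison (rpc_py and the scratch files as in the previous sketch):

    "$rpc_py" save_config > "$tmp_default"      # defaults: action_on_timeout=none, timeouts 0
    "$rpc_py" bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc_py" save_config > "$tmp_modified"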
00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:41:19.001 Setting timeout_us is changed as expected. 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:41:19.001 Setting timeout_admin_us is changed as expected. 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
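Each setting is then compared field by field with the grep | awk | sed pipeline traced above; this run passes because every value moved (action_on_timeout none -> abort, timeout_us 0 -> 12000000, timeout_admin_us 0 -> 24000000). The loop, mirroring the traced commands:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$tmp_default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$tmp_modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "ERROR: setting $setting was not changed ($before)" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done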
00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_180486 /tmp/settings_modified_180486 00:41:19.001 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 180512 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 180512 ']' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 180512 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180512 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:19.001 killing process with pid 180512 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180512' 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 180512 00:41:19.001 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 180512 00:41:19.594 RPC TIMEOUT SETTING TEST PASSED. 00:41:19.594 23:27:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:41:19.594 ************************************ 00:41:19.594 END TEST nvme_rpc_timeouts 00:41:19.594 ************************************ 00:41:19.594 00:41:19.594 real 0m2.683s 00:41:19.594 user 0m5.591s 00:41:19.594 sys 0m0.617s 00:41:19.594 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:19.594 23:27:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:41:19.594 23:27:08 -- common/autotest_common.sh@1142 -- # return 0 00:41:19.594 23:27:08 -- spdk/autotest.sh@243 -- # uname -s 00:41:19.594 23:27:08 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:41:19.594 23:27:08 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:41:19.594 23:27:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:19.594 23:27:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:19.594 23:27:08 -- common/autotest_common.sh@10 -- # set +x 00:41:19.594 ************************************ 00:41:19.594 START TEST sw_hotplug 00:41:19.594 ************************************ 00:41:19.594 23:27:08 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:41:19.594 * Looking for test storage... 
00:41:19.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:41:19.594 23:27:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:19.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:41:19.852 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:21.227 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:41:21.227 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:41:21.227 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:41:21.227 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@230 -- # local class 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:41:21.227 23:27:10 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@15 -- # local i 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:41:21.485 23:27:10 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:41:21.485 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:41:21.485 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:41:21.485 23:27:10 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:21.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:41:21.743 Waiting for block devices as requested 00:41:21.743 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:21.743 23:27:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:41:21.743 23:27:11 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:22.011 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:41:22.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:41:22.272 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:23.647 23:27:12 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:41:23.647 23:27:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=181069 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:41:23.647 23:27:13 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:41:23.647 23:27:13 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:41:23.647 23:27:13 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:41:23.647 23:27:13 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:41:23.647 23:27:13 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:41:23.647 23:27:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:41:23.906 Initializing NVMe Controllers 00:41:23.906 Attaching to 0000:00:10.0 00:41:23.906 Attached to 0000:00:10.0 00:41:23.906 Initialization complete. Starting I/O... 
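The nvme_in_userspace scan traced above needs no driver to be loaded; it keys purely off the PCI class code (class 01, subclass 08, progif 02, i.e. NVM Express). The pipeline it reduces to:

    # -D prints full domain:bus:dev.fn addresses; 0108 = mass-storage/NVM, -p02 = NVMe progif
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'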
00:41:23.906 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:41:23.906 00:41:24.841 QEMU NVMe Ctrl (12340 ): 2268 I/Os completed (+2268) 00:41:24.841 00:41:26.218 QEMU NVMe Ctrl (12340 ): 5476 I/Os completed (+3208) 00:41:26.218 00:41:27.154 QEMU NVMe Ctrl (12340 ): 9070 I/Os completed (+3594) 00:41:27.154 00:41:28.089 QEMU NVMe Ctrl (12340 ): 12730 I/Os completed (+3660) 00:41:28.089 00:41:29.026 QEMU NVMe Ctrl (12340 ): 16442 I/Os completed (+3712) 00:41:29.026 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:41:29.962 [2024-07-13 23:27:19.042015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:29.962 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:29.962 [2024-07-13 23:27:19.043333] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 [2024-07-13 23:27:19.043426] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 [2024-07-13 23:27:19.043453] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 [2024-07-13 23:27:19.043474] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:29.962 [2024-07-13 23:27:19.045581] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 [2024-07-13 23:27:19.045647] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 [2024-07-13 23:27:19.045667] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 [2024-07-13 23:27:19.045685] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:41:29.962 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:41:29.962 23:27:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:41:29.962 Attaching to 0000:00:10.0 00:41:29.962 Attached to 0000:00:10.0 00:41:30.902 QEMU NVMe Ctrl (12340 ): 3636 I/Os completed (+3636) 00:41:30.902 00:41:31.845 QEMU NVMe Ctrl (12340 ): 7292 I/Os completed (+3656) 00:41:31.845 00:41:33.221 QEMU NVMe Ctrl (12340 ): 10718 I/Os completed (+3426) 00:41:33.221 00:41:34.156 QEMU NVMe Ctrl (12340 ): 13449 I/Os completed (+2731) 00:41:34.156 00:41:35.088 QEMU NVMe Ctrl (12340 ): 16226 I/Os completed (+2777) 00:41:35.088 00:41:36.019 QEMU NVMe Ctrl (12340 ): 18978 I/Os completed (+2752) 00:41:36.019 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:41:36.019 [2024-07-13 
23:27:25.243424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:36.019 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:36.019 [2024-07-13 23:27:25.245139] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 [2024-07-13 23:27:25.245198] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 [2024-07-13 23:27:25.245222] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 [2024-07-13 23:27:25.245260] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:36.019 [2024-07-13 23:27:25.247803] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 [2024-07-13 23:27:25.247863] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 [2024-07-13 23:27:25.247888] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 [2024-07-13 23:27:25.247923] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:41:36.019 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:41:36.277 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:41:36.277 23:27:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:41:36.277 Attaching to 0000:00:10.0 00:41:36.277 Attached to 0000:00:10.0 00:41:36.841 QEMU NVMe Ctrl (12340 ): 2164 I/Os completed (+2164) 00:41:36.841 00:41:38.214 QEMU NVMe Ctrl (12340 ): 5147 I/Os completed (+2983) 00:41:38.214 00:41:39.149 QEMU NVMe Ctrl (12340 ): 7942 I/Os completed (+2795) 00:41:39.149 00:41:40.083 QEMU NVMe Ctrl (12340 ): 10795 I/Os completed (+2853) 00:41:40.083 00:41:41.019 QEMU NVMe Ctrl (12340 ): 13752 I/Os completed (+2957) 00:41:41.019 00:41:41.954 QEMU NVMe Ctrl (12340 ): 16824 I/Os completed (+3072) 00:41:41.954 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:41:42.213 [2024-07-13 23:27:31.438022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
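In this first pass (use_bdev=false) each event is driven from sysfs: the bare echo 1 / echo uio_pci_generic / echo 0000:00:10.0 traces above are writes into PCI sysfs attributes whose redirections xtrace does not display. A generic remove/rescan cycle of the same shape (the remove path is an assumption; /sys/bus/pci/rescan is confirmed by the trap later in this log):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # hot-remove the function (assumed path)
    sleep 6                                       # hotplug_wait: let the app see the removal
    echo 1 > /sys/bus/pci/rescan                  # re-discover the device on the bus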
00:41:42.213 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:42.213 [2024-07-13 23:27:31.439173] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 [2024-07-13 23:27:31.439242] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 [2024-07-13 23:27:31.439266] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 [2024-07-13 23:27:31.439284] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:42.213 [2024-07-13 23:27:31.441330] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 [2024-07-13 23:27:31.441375] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 [2024-07-13 23:27:31.441394] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 [2024-07-13 23:27:31.441422] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:41:42.213 23:27:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:41:42.213 Attaching to 0000:00:10.0 00:41:42.213 Attached to 0000:00:10.0 00:41:42.213 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:42.213 [2024-07-13 23:27:31.615544] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:41:48.776 23:27:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:41:48.776 23:27:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:41:48.776 23:27:37 sw_hotplug -- common/autotest_common.sh@715 -- # time=24.57 00:41:48.776 23:27:37 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.57 00:41:48.776 23:27:37 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:41:48.776 23:27:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.57 00:41:48.776 23:27:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.57 1 00:41:48.776 remove_attach_helper took 24.57s to complete (handling 1 nvme drive(s)) 23:27:37 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 181069 00:41:55.346 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (181069) - No such process 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 181069 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=181403 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:41:55.346 23:27:43 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 181403 00:41:55.346 23:27:43 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 181403 ']' 00:41:55.347 23:27:43 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.347 23:27:43 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:55.347 23:27:43 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:55.347 23:27:43 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:55.347 23:27:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:55.347 [2024-07-13 23:27:43.716798] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:41:55.347 [2024-07-13 23:27:43.717423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181403 ] 00:41:55.347 [2024-07-13 23:27:43.873579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.347 [2024-07-13 23:27:44.004218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:41:55.347 23:27:44 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:41:55.347 23:27:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:42:01.907 23:27:50 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:01.907 23:27:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.907 23:27:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:01.907 23:27:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.907 [2024-07-13 23:27:50.699260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:01.907 [2024-07-13 23:27:50.700820] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:01.907 [2024-07-13 23:27:50.700882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:01.907 [2024-07-13 23:27:50.700960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:01.907 [2024-07-13 23:27:50.701022] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:01.907 [2024-07-13 23:27:50.701048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:01.907 [2024-07-13 23:27:50.701076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:01.907 [2024-07-13 23:27:50.701096] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:01.907 [2024-07-13 23:27:50.701122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:01.907 [2024-07-13 23:27:50.701160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:01.907 [2024-07-13 23:27:50.701187] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:01.907 [2024-07-13 23:27:50.701210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:01.907 [2024-07-13 23:27:50.701236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:42:01.907 23:27:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:01.907 23:27:51 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.907 23:27:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:01.907 23:27:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:42:01.907 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:42:02.164 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:42:02.164 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:42:02.164 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:42:02.164 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:42:02.164 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:42:02.164 23:27:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:08.725 23:27:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:08.725 23:27:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:08.725 23:27:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:42:08.725 [2024-07-13 23:27:57.499329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
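In the target-driven pass the script no longer watches sysfs; it polls the bdev layer over RPC until the removed controller's PCIe address drops out of bdev_get_bdevs, which is what the 'Still waiting for %s to be gone' trace is printing. A sketch of that loop, using the bdev_bdfs helper exactly as traced:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done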
00:42:08.725 [2024-07-13 23:27:57.500984] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:08.725 [2024-07-13 23:27:57.501056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:08.725 [2024-07-13 23:27:57.501092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:08.725 [2024-07-13 23:27:57.501127] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:08.725 [2024-07-13 23:27:57.501147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:08.725 [2024-07-13 23:27:57.501184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:08.725 [2024-07-13 23:27:57.501204] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:08.725 [2024-07-13 23:27:57.501240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:08.725 [2024-07-13 23:27:57.501261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:08.725 [2024-07-13 23:27:57.501294] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:08.725 [2024-07-13 23:27:57.501315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:08.725 [2024-07-13 23:27:57.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:08.725 23:27:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:08.725 23:27:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:08.725 23:27:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:42:08.725 23:27:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:42:15.334 
23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:15.334 23:28:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.334 23:28:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:15.334 23:28:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:15.334 23:28:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.334 23:28:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:15.334 [2024-07-13 23:28:03.799407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:15.334 [2024-07-13 23:28:03.801042] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.334 [2024-07-13 23:28:03.801103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.334 [2024-07-13 23:28:03.801142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.334 [2024-07-13 23:28:03.801173] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.334 [2024-07-13 23:28:03.801200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.334 [2024-07-13 23:28:03.801222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.334 [2024-07-13 23:28:03.801278] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.334 [2024-07-13 23:28:03.801321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.334 [2024-07-13 23:28:03.801349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.334 [2024-07-13 23:28:03.801375] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.334 [2024-07-13 23:28:03.801396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.334 [2024-07-13 23:28:03.801420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.334 23:28:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:42:15.334 23:28:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:15.334 23:28:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.334 23:28:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:15.334 23:28:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:42:15.334 23:28:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@715 -- # time=25.99 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@716 -- # echo 25.99 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.99 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.99 1 00:42:21.896 remove_attach_helper took 25.99s to complete (handling 1 nvme drive(s)) 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.896 23:28:10 
sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:42:21.896 23:28:10 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:42:21.896 23:28:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:28.457 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:28.458 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:28.458 23:28:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.458 23:28:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:28.458 23:28:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.458 [2024-07-13 23:28:16.721205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
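tgt_run_hotplug brackets the timed cycles with SPDK's own hotplug monitor, toggled over RPC as traced above: -d stops the monitor before reconfiguring, and -e re-arms it so the target notices subsequent attach/detach events by itself.

    rpc_cmd bdev_nvme_set_hotplug -d    # disable the NVMe hotplug monitor
    rpc_cmd bdev_nvme_set_hotplug -e    # re-enable it before the next remove/attach round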
00:42:28.458 [2024-07-13 23:28:16.722664] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:28.458 [2024-07-13 23:28:16.722730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:28.458 [2024-07-13 23:28:16.722767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:28.458 [2024-07-13 23:28:16.722796] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:28.458 [2024-07-13 23:28:16.722820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:28.458 [2024-07-13 23:28:16.722857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:28.458 [2024-07-13 23:28:16.722882] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:28.458 [2024-07-13 23:28:16.722904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:28.458 [2024-07-13 23:28:16.722921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:28.458 [2024-07-13 23:28:16.722955] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:28.458 [2024-07-13 23:28:16.722975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:28.458 [2024-07-13 23:28:16.723007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:28.458 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:42:28.458 23:28:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:28.458 23:28:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.458 23:28:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:28.458 23:28:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:42:28.458 23:28:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:42:35.021 23:28:23 sw_hotplug -- 
nvme/sw_hotplug.sh@68 -- # true 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:35.021 23:28:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.021 23:28:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:35.021 23:28:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:42:35.021 [2024-07-13 23:28:23.421282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:35.021 [2024-07-13 23:28:23.422945] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:35.021 [2024-07-13 23:28:23.422998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:35.021 [2024-07-13 23:28:23.423031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:35.021 [2024-07-13 23:28:23.423064] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:35.021 [2024-07-13 23:28:23.423109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:35.021 [2024-07-13 23:28:23.423139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:35.021 [2024-07-13 23:28:23.423164] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:35.021 [2024-07-13 23:28:23.423183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:35.021 [2024-07-13 23:28:23.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:35.021 [2024-07-13 23:28:23.423241] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:35.021 [2024-07-13 23:28:23.423286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:35.021 [2024-07-13 23:28:23.423317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:35.021 23:28:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.021 23:28:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:35.021 23:28:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:42:35.021 23:28:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:40.285 23:28:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.285 23:28:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:40.285 23:28:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:42:40.285 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:42:40.544 [2024-07-13 23:28:29.721370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:42:40.544 [2024-07-13 23:28:29.722924] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:40.544 [2024-07-13 23:28:29.722980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:40.544 [2024-07-13 23:28:29.723025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:40.544 [2024-07-13 23:28:29.723061] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:40.544 [2024-07-13 23:28:29.723087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:40.544 [2024-07-13 23:28:29.723104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:40.544 [2024-07-13 23:28:29.723128] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:40.544 [2024-07-13 23:28:29.723148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:40.544 [2024-07-13 23:28:29.723170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:40.544 [2024-07-13 23:28:29.723187] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:40.544 [2024-07-13 23:28:29.723206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:40.544 [2024-07-13 23:28:29.723225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:40.544 23:28:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.544 23:28:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:40.544 23:28:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:42:40.544 23:28:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:42:47.173 23:28:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:47.173 23:28:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:47.173 23:28:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:42:47.173 23:28:35 sw_hotplug -- common/autotest_common.sh@715 -- # time=25.33 00:42:47.173 23:28:35 sw_hotplug -- common/autotest_common.sh@716 -- # echo 25.33 00:42:47.173 23:28:35 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.33 00:42:47.173 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.33 1 00:42:47.174 remove_attach_helper took 25.33s to complete (handling 1 nvme drive(s)) 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:42:47.174 23:28:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 181403 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 181403 ']' 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 181403 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 181403 00:42:47.174 killing process with pid 181403 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 181403' 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@967 -- # kill 181403 00:42:47.174 23:28:35 sw_hotplug -- common/autotest_common.sh@972 -- # wait 181403 00:42:47.432 23:28:36 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:47.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:47.692 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:48.628 00:42:48.628 real 1m29.004s 00:42:48.628 user 1m4.000s 00:42:48.628 sys 0m15.071s 00:42:48.628 23:28:37 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:48.628 23:28:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:48.628 ************************************ 00:42:48.628 END TEST sw_hotplug 00:42:48.628 ************************************ 00:42:48.628 23:28:37 -- common/autotest_common.sh@1142 -- # return 0 00:42:48.628 23:28:37 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:42:48.628 23:28:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:42:48.628 23:28:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:48.628 23:28:37 -- common/autotest_common.sh@10 -- # set +x 00:42:48.628 23:28:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@279 -- # '[' 0 
-eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:42:48.628 23:28:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:42:48.628 23:28:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:42:48.628 23:28:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:42:48.628 23:28:37 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:42:48.628 23:28:37 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:42:48.628 23:28:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:48.628 23:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:48.628 23:28:37 -- common/autotest_common.sh@10 -- # set +x 00:42:48.628 ************************************ 00:42:48.628 START TEST blockdev_raid5f 00:42:48.628 ************************************ 00:42:48.628 23:28:37 blockdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:42:48.887 * Looking for test storage... 00:42:48.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:42:48.887 23:28:38 blockdev_raid5f -- 
bdev/blockdev.sh@686 -- # wait_for_rpc= 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=182254 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 182254 00:42:48.887 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:48.887 23:28:38 blockdev_raid5f -- common/autotest_common.sh@829 -- # '[' -z 182254 ']' 00:42:48.887 23:28:38 blockdev_raid5f -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:48.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:48.887 23:28:38 blockdev_raid5f -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:48.887 23:28:38 blockdev_raid5f -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:48.887 23:28:38 blockdev_raid5f -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:48.887 23:28:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:48.887 [2024-07-13 23:28:38.152188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:42:48.887 [2024-07-13 23:28:38.152472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182254 ] 00:42:49.146 [2024-07-13 23:28:38.302000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.146 [2024-07-13 23:28:38.382034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:49.713 23:28:38 blockdev_raid5f -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:49.713 23:28:38 blockdev_raid5f -- common/autotest_common.sh@862 -- # return 0 00:42:49.713 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:42:49.713 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:42:49.713 23:28:38 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:42:49.713 23:28:38 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.713 23:28:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.713 Malloc0 00:42:49.713 Malloc1 00:42:49.713 Malloc2 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.713 23:28:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.713 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "601ba576-97d6-44e9-96ab-3a37ecb6797c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "601ba576-97d6-44e9-96ab-3a37ecb6797c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "601ba576-97d6-44e9-96ab-3a37ecb6797c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "79e75afa-6665-4c2e-b3e4-0015d2a6d044",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e251fbe3-068b-4c6b-9477-4af68e968664",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "656959ec-86b3-40de-b4a8-3e599dfa9b8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@752 -- # 
hello_world_bdev=raid5f 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:42:49.971 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@754 -- # killprocess 182254 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@948 -- # '[' -z 182254 ']' 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@952 -- # kill -0 182254 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@953 -- # uname 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182254 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:49.971 killing process with pid 182254 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182254' 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@967 -- # kill 182254 00:42:49.971 23:28:39 blockdev_raid5f -- common/autotest_common.sh@972 -- # wait 182254 00:42:50.538 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:50.538 23:28:39 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:42:50.538 23:28:39 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:42:50.538 23:28:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:50.538 23:28:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:50.538 ************************************ 00:42:50.538 START TEST bdev_hello_world 00:42:50.538 ************************************ 00:42:50.538 23:28:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:42:50.538 [2024-07-13 23:28:39.873265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:42:50.538 [2024-07-13 23:28:39.873691] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182295 ] 00:42:50.797 [2024-07-13 23:28:40.015226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.797 [2024-07-13 23:28:40.083009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.055 [2024-07-13 23:28:40.333690] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:51.055 [2024-07-13 23:28:40.334085] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:42:51.055 [2024-07-13 23:28:40.334181] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:51.055 [2024-07-13 23:28:40.334772] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:51.055 [2024-07-13 23:28:40.335053] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:51.055 [2024-07-13 23:28:40.335125] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:51.055 [2024-07-13 23:28:40.335243] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:42:51.055 00:42:51.055 [2024-07-13 23:28:40.335342] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:51.313 ************************************ 00:42:51.313 END TEST bdev_hello_world 00:42:51.313 ************************************ 00:42:51.313 00:42:51.313 real 0m0.865s 00:42:51.313 user 0m0.489s 00:42:51.313 sys 0m0.261s 00:42:51.313 23:28:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:51.313 23:28:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:51.573 23:28:40 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:42:51.573 23:28:40 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:42:51.573 23:28:40 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:51.573 23:28:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:51.573 23:28:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:51.573 ************************************ 00:42:51.573 START TEST bdev_bounds 00:42:51.573 ************************************ 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=182333 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 182333' 00:42:51.573 Process bdevio pid: 182333 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 182333 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 182333 ']' 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:51.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:51.573 23:28:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:51.573 [2024-07-13 23:28:40.794606] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:42:51.573 [2024-07-13 23:28:40.795068] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182333 ] 00:42:51.573 [2024-07-13 23:28:40.949288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:51.832 [2024-07-13 23:28:41.016402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:51.832 [2024-07-13 23:28:41.016536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.832 [2024-07-13 23:28:41.016537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:52.400 23:28:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:52.400 23:28:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:42:52.400 23:28:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:52.658 I/O targets: 00:42:52.658 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:42:52.658 00:42:52.658 00:42:52.658 CUnit - A unit testing framework for C - Version 2.1-3 00:42:52.658 http://cunit.sourceforge.net/ 00:42:52.658 00:42:52.658 00:42:52.658 Suite: bdevio tests on: raid5f 00:42:52.658 Test: blockdev write read block ...passed 00:42:52.658 Test: blockdev write zeroes read block ...passed 00:42:52.658 Test: blockdev write zeroes read no split ...passed 00:42:52.658 Test: blockdev write zeroes read split ...passed 00:42:52.658 Test: blockdev write zeroes read split partial ...passed 00:42:52.658 Test: blockdev reset ...passed 00:42:52.658 Test: blockdev write read 8 blocks ...passed 00:42:52.658 Test: blockdev write read size > 128k ...passed 00:42:52.658 Test: blockdev write read invalid size ...passed 00:42:52.658 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:52.658 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:52.658 Test: blockdev write read max offset ...passed 00:42:52.658 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:52.658 Test: blockdev writev readv 8 blocks ...passed 00:42:52.658 Test: blockdev writev readv 30 x 1block ...passed 00:42:52.658 Test: blockdev writev readv block ...passed 00:42:52.658 Test: blockdev writev readv size > 128k ...passed 00:42:52.658 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:52.658 Test: blockdev comparev and writev ...passed 00:42:52.658 Test: blockdev nvme passthru rw ...passed 00:42:52.658 Test: blockdev nvme passthru vendor specific ...passed 00:42:52.658 Test: blockdev nvme admin passthru ...passed 00:42:52.658 Test: blockdev copy ...passed 00:42:52.658 00:42:52.658 Run Summary: Type Total Ran Passed Failed Inactive 00:42:52.658 suites 1 1 n/a 
0 0 00:42:52.658 tests 23 23 23 0 0 00:42:52.658 asserts 130 130 130 0 n/a 00:42:52.658 00:42:52.658 Elapsed time = 0.307 seconds 00:42:52.658 0 00:42:52.658 23:28:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 182333 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 182333 ']' 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 182333 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182333 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:52.659 killing process with pid 182333 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182333' 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # kill 182333 00:42:52.659 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # wait 182333 00:42:53.225 ************************************ 00:42:53.225 END TEST bdev_bounds 00:42:53.225 ************************************ 00:42:53.225 23:28:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:42:53.225 00:42:53.225 real 0m1.650s 00:42:53.225 user 0m4.154s 00:42:53.225 sys 0m0.312s 00:42:53.225 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:53.225 23:28:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:53.225 23:28:42 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:42:53.225 23:28:42 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:42:53.225 23:28:42 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:42:53.225 23:28:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:53.225 23:28:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:53.225 ************************************ 00:42:53.225 START TEST bdev_nbd 00:42:53.225 ************************************ 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:42:53.225 23:28:42 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=182390 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 182390 /var/tmp/spdk-nbd.sock 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 182390 ']' 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:53.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:53.225 23:28:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:53.225 [2024-07-13 23:28:42.503950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:42:53.225 [2024-07-13 23:28:42.504403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:53.484 [2024-07-13 23:28:42.644619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:53.484 [2024-07-13 23:28:42.712841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:54.050 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:54.308 1+0 records in 00:42:54.308 1+0 records out 00:42:54.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629526 s, 6.5 MB/s 00:42:54.308 23:28:43 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:54.308 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:54.566 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:54.566 { 00:42:54.566 "nbd_device": "/dev/nbd0", 00:42:54.566 "bdev_name": "raid5f" 00:42:54.566 } 00:42:54.566 ]' 00:42:54.566 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:54.566 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:54.566 { 00:42:54.566 "nbd_device": "/dev/nbd0", 00:42:54.566 "bdev_name": "raid5f" 00:42:54.566 } 00:42:54.566 ]' 00:42:54.566 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:54.824 23:28:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:54.824 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:54.825 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:55.083 23:28:44 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:55.083 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:55.083 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:42:55.341 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:42:55.342 /dev/nbd0 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 
00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:55.342 1+0 records in 00:42:55.342 1+0 records out 00:42:55.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719142 s, 5.7 MB/s 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:55.342 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:55.600 23:28:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:55.600 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:55.600 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:55.600 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:55.600 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.600 23:28:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:55.858 { 00:42:55.858 "nbd_device": "/dev/nbd0", 00:42:55.858 "bdev_name": "raid5f" 00:42:55.858 } 00:42:55.858 ]' 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:55.858 { 00:42:55.858 "nbd_device": "/dev/nbd0", 00:42:55.858 "bdev_name": "raid5f" 00:42:55.858 } 00:42:55.858 ]' 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:55.858 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:55.859 256+0 records in 00:42:55.859 256+0 records out 00:42:55.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0085333 s, 123 MB/s 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:55.859 256+0 records in 00:42:55.859 256+0 records out 00:42:55.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282245 s, 37.2 MB/s 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:55.859 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:56.117 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:56.117 
23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:56.375 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:56.634 malloc_lvol_verify 00:42:56.634 23:28:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:56.892 63be54a7-9dce-4f0b-a57c-d3595a55c7ab 00:42:56.892 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:42:57.151 aebbf148-a98f-448b-864b-3c18873987ea 00:42:57.151 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:42:57.409 /dev/nbd0 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:42:57.409 mke2fs 1.46.5 (30-Dec-2021) 00:42:57.409 00:42:57.409 Filesystem too small for a journal 00:42:57.409 Discarding device blocks: 0/1024 done 00:42:57.409 Creating filesystem with 1024 4k blocks and 1024 inodes 00:42:57.409 00:42:57.409 Allocating group tables: 0/1 done 00:42:57.409 Writing inode tables: 0/1 done 00:42:57.409 Writing superblocks and filesystem accounting information: 0/1 done 00:42:57.409 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.409 23:28:46 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:57.409 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 182390 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 182390 ']' 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 182390 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182390 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182390' 00:42:57.669 killing process with pid 182390 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # kill 182390 00:42:57.669 23:28:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # wait 182390 00:42:57.928 23:28:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:42:57.928 00:42:57.928 real 0m4.780s 00:42:57.928 user 0m7.317s 00:42:57.928 sys 0m1.112s 00:42:57.928 23:28:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:57.928 ************************************ 00:42:57.928 23:28:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:57.928 END TEST bdev_nbd 00:42:57.928 ************************************ 00:42:57.928 23:28:47 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:42:57.928 23:28:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:42:57.928 23:28:47 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:42:57.928 23:28:47 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:42:57.928 23:28:47 blockdev_raid5f 
-- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:42:57.928 23:28:47 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:57.928 23:28:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:57.928 23:28:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:57.928 ************************************ 00:42:57.928 START TEST bdev_fio 00:42:57.928 ************************************ 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:42:57.928 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:42:57.928 23:28:47 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:57.928 23:28:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:58.187 ************************************ 00:42:58.187 START TEST bdev_fio_rw_verify 00:42:58.187 ************************************ 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify 
-- common/autotest_common.sh@1347 -- # break 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:58.187 23:28:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:58.188 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:42:58.188 fio-3.35 00:42:58.188 Starting 1 thread 00:43:10.419 00:43:10.419 job_raid5f: (groupid=0, jobs=1): err= 0: pid=182610: Sat Jul 13 23:28:58 2024 00:43:10.419 read: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(436MiB/10001msec) 00:43:10.419 slat (usec): min=19, max=104, avg=21.53, stdev= 4.21 00:43:10.419 clat (usec): min=12, max=487, avg=144.77, stdev=53.62 00:43:10.419 lat (usec): min=33, max=527, avg=166.29, stdev=54.69 00:43:10.419 clat percentiles (usec): 00:43:10.419 | 50.000th=[ 145], 99.000th=[ 277], 99.900th=[ 343], 99.990th=[ 412], 00:43:10.419 | 99.999th=[ 457] 00:43:10.419 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(452MiB/9884msec); 0 zone resets 00:43:10.419 slat (usec): min=9, max=260, avg=18.24, stdev= 4.92 00:43:10.419 clat (usec): min=61, max=1202, avg=326.46, stdev=57.06 00:43:10.419 lat (usec): min=77, max=1417, avg=344.71, stdev=59.37 00:43:10.419 clat percentiles (usec): 00:43:10.419 | 50.000th=[ 326], 99.000th=[ 529], 99.900th=[ 676], 99.990th=[ 1029], 00:43:10.419 | 99.999th=[ 1106] 00:43:10.419 bw ( KiB/s): min=42328, max=50736, per=98.74%, avg=46271.58, stdev=2145.11, samples=19 00:43:10.419 iops : min=10582, max=12684, avg=11567.89, stdev=536.28, samples=19 00:43:10.419 lat (usec) : 20=0.01%, 50=0.01%, 100=11.35%, 250=39.33%, 500=48.61% 00:43:10.419 lat (usec) : 750=0.68%, 1000=0.02% 00:43:10.419 lat (msec) : 2=0.01% 00:43:10.419 cpu : usr=99.42%, sys=0.57%, ctx=27, majf=0, minf=11025 00:43:10.419 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:10.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.419 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.419 issued rwts: total=111628,115791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.419 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:10.419 00:43:10.419 Run status group 0 (all jobs): 00:43:10.419 READ: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10001-10001msec 00:43:10.419 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=452MiB (474MB), run=9884-9884msec 00:43:10.419 ----------------------------------------------------- 00:43:10.419 Suppressions used: 00:43:10.419 count bytes template 00:43:10.419 1 7 /usr/src/fio/parse.c 00:43:10.419 685 65760 /usr/src/fio/iolog.c 00:43:10.419 1 904 libcrypto.so 00:43:10.419 ----------------------------------------------------- 00:43:10.419 00:43:10.419 00:43:10.419 real 0m11.321s 00:43:10.419 user 0m11.987s 00:43:10.419 sys 0m0.636s 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 
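The rw-verify stage above drives fio through SPDK's userspace bdev layer rather than a kernel block device: the harness locates the ASan runtime via ldd, preloads it together with the fio plugin, and hands fio the generated job file plus the JSON bdev config. A minimal standalone sketch of that invocation, assuming the same repo layout and sanitizer path as this run:

    # Preload the sanitizer runtime plus the SPDK fio plugin, then let the
    # spdk_bdev ioengine attach to the bdevs described in bdev.json.
    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio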
00:43:10.419 ************************************ 00:43:10.419 END TEST bdev_fio_rw_verify 00:43:10.419 ************************************ 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "601ba576-97d6-44e9-96ab-3a37ecb6797c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "601ba576-97d6-44e9-96ab-3a37ecb6797c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "601ba576-97d6-44e9-96ab-3a37ecb6797c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "79e75afa-6665-4c2e-b3e4-0015d2a6d044",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"e251fbe3-068b-4c6b-9477-4af68e968664",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "656959ec-86b3-40de-b4a8-3e599dfa9b8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:10.419 /home/vagrant/spdk_repo/spdk 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:43:10.419 00:43:10.419 real 0m11.499s 00:43:10.419 user 0m12.110s 00:43:10.419 sys 0m0.692s 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:10.419 23:28:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:10.419 ************************************ 00:43:10.419 END TEST bdev_fio 00:43:10.419 ************************************ 00:43:10.419 23:28:58 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:43:10.419 23:28:58 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:10.419 23:28:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:10.419 23:28:58 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:10.419 23:28:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:10.419 23:28:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:10.419 ************************************ 00:43:10.419 START TEST bdev_verify 00:43:10.419 ************************************ 00:43:10.419 23:28:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:10.419 [2024-07-13 23:28:58.895273] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:43:10.419 [2024-07-13 23:28:58.895537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182770 ] 00:43:10.419 [2024-07-13 23:28:59.047294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:10.419 [2024-07-13 23:28:59.116003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:10.419 [2024-07-13 23:28:59.116020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.419 Running I/O for 5 seconds... 
00:43:15.693 00:43:15.693 Latency(us) 00:43:15.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.694 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:15.694 Verification LBA range: start 0x0 length 0x2000 00:43:15.694 raid5f : 5.02 6125.74 23.93 0.00 0.00 31517.93 163.84 29312.47 00:43:15.694 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:15.694 Verification LBA range: start 0x2000 length 0x2000 00:43:15.694 raid5f : 5.02 6148.91 24.02 0.00 0.00 31374.02 498.97 29550.78 00:43:15.694 =================================================================================================================== 00:43:15.694 Total : 12274.65 47.95 0.00 0.00 31445.84 163.84 29550.78 00:43:15.694 00:43:15.694 real 0m5.891s 00:43:15.694 user 0m10.982s 00:43:15.694 sys 0m0.281s 00:43:15.694 23:29:04 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:15.694 ************************************ 00:43:15.694 23:29:04 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:15.694 END TEST bdev_verify 00:43:15.694 ************************************ 00:43:15.694 23:29:04 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:43:15.694 23:29:04 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:15.694 23:29:04 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:15.694 23:29:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:15.694 23:29:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:15.694 ************************************ 00:43:15.694 START TEST bdev_verify_big_io 00:43:15.694 ************************************ 00:43:15.694 23:29:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:15.694 [2024-07-13 23:29:04.841076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:43:15.694 [2024-07-13 23:29:04.841336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182859 ] 00:43:15.694 [2024-07-13 23:29:04.991527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:15.694 [2024-07-13 23:29:05.060798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:15.694 [2024-07-13 23:29:05.060808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.952 Running I/O for 5 seconds... 
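Both verification stages reuse SPDK's bdevperf example app with the same queue depth, duration and core mask, differing only in I/O size: 4 KiB for the plain verify pass above, 64 KiB for the big-I/O pass whose results follow. Stripped of the run_test harness, the invocation is roughly:

    # -q: queue depth, -o: I/O size in bytes, -w: workload, -t: run time in
    # seconds, -m 0x3: two reactor cores; -C is passed as in the test script.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3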
00:43:21.219 00:43:21.219 Latency(us) 00:43:21.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:21.219 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:21.219 Verification LBA range: start 0x0 length 0x200 00:43:21.219 raid5f : 5.16 467.06 29.19 0.00 0.00 6789372.43 216.90 366048.35 00:43:21.219 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:21.219 Verification LBA range: start 0x200 length 0x200 00:43:21.219 raid5f : 5.19 464.20 29.01 0.00 0.00 6802355.25 170.36 369861.35 00:43:21.219 =================================================================================================================== 00:43:21.219 Total : 931.26 58.20 0.00 0.00 6795863.84 170.36 369861.35 00:43:21.477 00:43:21.477 real 0m5.985s 00:43:21.477 user 0m11.172s 00:43:21.477 sys 0m0.302s 00:43:21.477 23:29:10 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:21.477 ************************************ 00:43:21.477 END TEST bdev_verify_big_io 00:43:21.477 ************************************ 00:43:21.477 23:29:10 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:21.477 23:29:10 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:43:21.477 23:29:10 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:21.477 23:29:10 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:21.477 23:29:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:21.477 23:29:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:21.477 ************************************ 00:43:21.477 START TEST bdev_write_zeroes 00:43:21.477 ************************************ 00:43:21.477 23:29:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:21.477 [2024-07-13 23:29:10.873291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:43:21.477 [2024-07-13 23:29:10.873537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182950 ] 00:43:21.736 [2024-07-13 23:29:11.020155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.737 [2024-07-13 23:29:11.084649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.001 Running I/O for 1 seconds... 
00:43:22.943 00:43:22.943 Latency(us) 00:43:22.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:22.943 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:22.943 raid5f : 1.00 25782.92 100.71 0.00 0.00 4949.21 1504.35 6523.81 00:43:22.943 =================================================================================================================== 00:43:22.943 Total : 25782.92 100.71 0.00 0.00 4949.21 1504.35 6523.81 00:43:23.201 00:43:23.201 real 0m1.732s 00:43:23.201 user 0m1.381s 00:43:23.201 sys 0m0.236s 00:43:23.201 23:29:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:23.201 23:29:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:23.201 ************************************ 00:43:23.201 END TEST bdev_write_zeroes 00:43:23.201 ************************************ 00:43:23.201 23:29:12 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:43:23.201 23:29:12 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:23.201 23:29:12 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:23.201 23:29:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:23.201 23:29:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:23.460 ************************************ 00:43:23.460 START TEST bdev_json_nonenclosed 00:43:23.460 ************************************ 00:43:23.460 23:29:12 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:23.460 [2024-07-13 23:29:12.668161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:43:23.460 [2024-07-13 23:29:12.668410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182998 ] 00:43:23.460 [2024-07-13 23:29:12.816776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.719 [2024-07-13 23:29:12.877797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:23.719 [2024-07-13 23:29:12.877940] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:43:23.719 [2024-07-13 23:29:12.877975] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:23.719 [2024-07-13 23:29:12.878004] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:23.719 00:43:23.719 real 0m0.374s 00:43:23.719 user 0m0.146s 00:43:23.719 sys 0m0.128s 00:43:23.719 23:29:12 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:43:23.719 23:29:12 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:23.719 ************************************ 00:43:23.719 END TEST bdev_json_nonenclosed 00:43:23.719 ************************************ 00:43:23.719 23:29:12 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:23.719 23:29:13 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:43:23.719 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@782 -- # true 00:43:23.719 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:23.719 23:29:13 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:23.719 23:29:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:23.719 23:29:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:23.719 ************************************ 00:43:23.719 START TEST bdev_json_nonarray 00:43:23.719 ************************************ 00:43:23.719 23:29:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:23.719 [2024-07-13 23:29:13.096370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:43:23.719 [2024-07-13 23:29:13.096616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183025 ] 00:43:23.978 [2024-07-13 23:29:13.241922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.978 [2024-07-13 23:29:13.305846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:23.978 [2024-07-13 23:29:13.306009] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
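The two JSON negative tests hand bdevperf deliberately malformed config files, and the errors above pin down the expected shape: a top-level object enclosed in {} whose "subsystems" key is an array. A hypothetical minimal file that would pass both checks (path and contents here are illustrative only):

    # Smallest shape both checks accept: a top-level object whose
    # "subsystems" key is an array; the target path is hypothetical.
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' \
        > /tmp/minimal_bdev.json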
00:43:23.978 [2024-07-13 23:29:13.306057] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:23.978 [2024-07-13 23:29:13.306093] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:24.235 00:43:24.235 real 0m0.387s 00:43:24.235 user 0m0.192s 00:43:24.235 sys 0m0.095s 00:43:24.235 23:29:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:43:24.235 23:29:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:24.235 23:29:13 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:24.235 ************************************ 00:43:24.235 END TEST bdev_json_nonarray 00:43:24.235 ************************************ 00:43:24.235 23:29:13 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@785 -- # true 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:43:24.235 23:29:13 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:43:24.235 00:43:24.235 real 0m35.490s 00:43:24.235 user 0m49.898s 00:43:24.235 sys 0m4.205s 00:43:24.235 23:29:13 blockdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:24.235 23:29:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:24.235 ************************************ 00:43:24.235 END TEST blockdev_raid5f 00:43:24.235 ************************************ 00:43:24.235 23:29:13 -- common/autotest_common.sh@1142 -- # return 0 00:43:24.235 23:29:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:43:24.235 23:29:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:43:24.235 23:29:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:24.235 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:43:24.235 23:29:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:43:24.235 23:29:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:24.235 23:29:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:24.235 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:43:26.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:26.132 Waiting for block devices as requested 00:43:26.132 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:26.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:26.389 Cleaning 00:43:26.389 Removing: /var/run/dpdk/spdk0/config 00:43:26.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:26.389 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:26.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:26.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:26.389 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:26.389 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:26.389 Removing: /dev/shm/spdk_tgt_trace.pid123067 00:43:26.389 Removing: /var/run/dpdk/spdk0 00:43:26.389 Removing: /var/run/dpdk/spdk_pid122898 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123067 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123296 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123398 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123431 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123553 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123576 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123706 00:43:26.389 Removing: /var/run/dpdk/spdk_pid123964 00:43:26.389 Removing: /var/run/dpdk/spdk_pid124133 00:43:26.389 Removing: /var/run/dpdk/spdk_pid124212 00:43:26.389 Removing: /var/run/dpdk/spdk_pid124301 00:43:26.389 Removing: /var/run/dpdk/spdk_pid124395 00:43:26.389 Removing: /var/run/dpdk/spdk_pid124479 00:43:26.389 Removing: /var/run/dpdk/spdk_pid124529 00:43:26.390 Removing: /var/run/dpdk/spdk_pid124572 00:43:26.390 Removing: /var/run/dpdk/spdk_pid124638 00:43:26.390 Removing: /var/run/dpdk/spdk_pid124737 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125331 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125387 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125440 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125461 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125533 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125554 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125628 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125649 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125694 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125717 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125769 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125795 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125930 00:43:26.390 Removing: /var/run/dpdk/spdk_pid125974 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126010 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126090 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126163 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126187 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126272 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126314 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126365 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126410 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126449 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126500 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126539 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126592 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126631 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126683 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126720 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126766 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126810 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126856 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126909 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126948 00:43:26.390 Removing: /var/run/dpdk/spdk_pid126999 00:43:26.390 Removing: /var/run/dpdk/spdk_pid127041 00:43:26.390 Removing: /var/run/dpdk/spdk_pid127095 00:43:26.390 Removing: /var/run/dpdk/spdk_pid127134 00:43:26.647 Removing: /var/run/dpdk/spdk_pid127188 00:43:26.647 Removing: /var/run/dpdk/spdk_pid127262 00:43:26.647 Removing: /var/run/dpdk/spdk_pid127372 00:43:26.647 Removing: /var/run/dpdk/spdk_pid127529 00:43:26.647 Removing: 
/var/run/dpdk/spdk_pid127583 00:43:26.647 Removing: /var/run/dpdk/spdk_pid127621 00:43:26.647 Removing: /var/run/dpdk/spdk_pid128815 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129019 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129199 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129301 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129421 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129471 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129509 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129531 00:43:26.647 Removing: /var/run/dpdk/spdk_pid129987 00:43:26.647 Removing: /var/run/dpdk/spdk_pid130069 00:43:26.647 Removing: /var/run/dpdk/spdk_pid130172 00:43:26.647 Removing: /var/run/dpdk/spdk_pid130218 00:43:26.647 Removing: /var/run/dpdk/spdk_pid131482 00:43:26.647 Removing: /var/run/dpdk/spdk_pid131844 00:43:26.647 Removing: /var/run/dpdk/spdk_pid132029 00:43:26.647 Removing: /var/run/dpdk/spdk_pid132949 00:43:26.647 Removing: /var/run/dpdk/spdk_pid133319 00:43:26.647 Removing: /var/run/dpdk/spdk_pid133498 00:43:26.647 Removing: /var/run/dpdk/spdk_pid134425 00:43:26.647 Removing: /var/run/dpdk/spdk_pid134960 00:43:26.647 Removing: /var/run/dpdk/spdk_pid135139 00:43:26.647 Removing: /var/run/dpdk/spdk_pid137273 00:43:26.647 Removing: /var/run/dpdk/spdk_pid137747 00:43:26.647 Removing: /var/run/dpdk/spdk_pid137948 00:43:26.647 Removing: /var/run/dpdk/spdk_pid140083 00:43:26.647 Removing: /var/run/dpdk/spdk_pid140565 00:43:26.647 Removing: /var/run/dpdk/spdk_pid140758 00:43:26.647 Removing: /var/run/dpdk/spdk_pid142914 00:43:26.647 Removing: /var/run/dpdk/spdk_pid143656 00:43:26.647 Removing: /var/run/dpdk/spdk_pid143851 00:43:26.647 Removing: /var/run/dpdk/spdk_pid146252 00:43:26.647 Removing: /var/run/dpdk/spdk_pid146801 00:43:26.647 Removing: /var/run/dpdk/spdk_pid147012 00:43:26.647 Removing: /var/run/dpdk/spdk_pid149416 00:43:26.647 Removing: /var/run/dpdk/spdk_pid149959 00:43:26.647 Removing: /var/run/dpdk/spdk_pid150162 00:43:26.647 Removing: /var/run/dpdk/spdk_pid152544 00:43:26.647 Removing: /var/run/dpdk/spdk_pid153389 00:43:26.647 Removing: /var/run/dpdk/spdk_pid153594 00:43:26.647 Removing: /var/run/dpdk/spdk_pid153788 00:43:26.647 Removing: /var/run/dpdk/spdk_pid154327 00:43:26.647 Removing: /var/run/dpdk/spdk_pid155281 00:43:26.647 Removing: /var/run/dpdk/spdk_pid155760 00:43:26.648 Removing: /var/run/dpdk/spdk_pid156639 00:43:26.648 Removing: /var/run/dpdk/spdk_pid157228 00:43:26.648 Removing: /var/run/dpdk/spdk_pid158202 00:43:26.648 Removing: /var/run/dpdk/spdk_pid158714 00:43:26.648 Removing: /var/run/dpdk/spdk_pid161561 00:43:26.648 Removing: /var/run/dpdk/spdk_pid162287 00:43:26.648 Removing: /var/run/dpdk/spdk_pid162832 00:43:26.648 Removing: /var/run/dpdk/spdk_pid165934 00:43:26.648 Removing: /var/run/dpdk/spdk_pid166780 00:43:26.648 Removing: /var/run/dpdk/spdk_pid167395 00:43:26.648 Removing: /var/run/dpdk/spdk_pid168766 00:43:26.648 Removing: /var/run/dpdk/spdk_pid169289 00:43:26.648 Removing: /var/run/dpdk/spdk_pid170527 00:43:26.648 Removing: /var/run/dpdk/spdk_pid171044 00:43:26.648 Removing: /var/run/dpdk/spdk_pid172291 00:43:26.648 Removing: /var/run/dpdk/spdk_pid172803 00:43:26.648 Removing: /var/run/dpdk/spdk_pid173645 00:43:26.648 Removing: /var/run/dpdk/spdk_pid173681 00:43:26.648 Removing: /var/run/dpdk/spdk_pid173720 00:43:26.648 Removing: /var/run/dpdk/spdk_pid173766 00:43:26.648 Removing: /var/run/dpdk/spdk_pid173885 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174014 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174227 00:43:26.648 Removing: 
/var/run/dpdk/spdk_pid174516 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174532 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174581 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174589 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174610 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174630 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174645 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174666 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174686 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174704 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174717 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174744 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174753 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174773 00:43:26.648 Removing: /var/run/dpdk/spdk_pid174789 00:43:26.905 Removing: /var/run/dpdk/spdk_pid174809 00:43:26.905 Removing: /var/run/dpdk/spdk_pid174825 00:43:26.905 Removing: /var/run/dpdk/spdk_pid174845 00:43:26.905 Removing: /var/run/dpdk/spdk_pid174857 00:43:26.906 Removing: /var/run/dpdk/spdk_pid174874 00:43:26.906 Removing: /var/run/dpdk/spdk_pid174915 00:43:26.906 Removing: /var/run/dpdk/spdk_pid174935 00:43:26.906 Removing: /var/run/dpdk/spdk_pid174963 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175029 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175070 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175075 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175118 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175133 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175140 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175192 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175207 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175237 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175252 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175261 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175274 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175289 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175299 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175311 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175328 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175354 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175396 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175411 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175443 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175458 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175469 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175517 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175532 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175569 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175577 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175595 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175608 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175617 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175630 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175642 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175652 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175731 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175788 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175902 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175926 00:43:26.906 Removing: /var/run/dpdk/spdk_pid175967 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176019 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176040 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176061 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176084 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176122 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176140 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176219 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176266 00:43:26.906 Removing: 
/var/run/dpdk/spdk_pid176311 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176569 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176682 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176717 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176803 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176872 00:43:26.906 Removing: /var/run/dpdk/spdk_pid176911 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177142 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177233 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177322 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177369 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177408 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177478 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177887 00:43:26.906 Removing: /var/run/dpdk/spdk_pid177925 00:43:26.906 Removing: /var/run/dpdk/spdk_pid178224 00:43:26.906 Removing: /var/run/dpdk/spdk_pid178315 00:43:26.906 Removing: /var/run/dpdk/spdk_pid178404 00:43:26.906 Removing: /var/run/dpdk/spdk_pid178452 00:43:26.906 Removing: /var/run/dpdk/spdk_pid178474 00:43:26.906 Removing: /var/run/dpdk/spdk_pid178502 00:43:26.906 Removing: /var/run/dpdk/spdk_pid179789 00:43:26.906 Removing: /var/run/dpdk/spdk_pid179914 00:43:26.906 Removing: /var/run/dpdk/spdk_pid179919 00:43:26.906 Removing: /var/run/dpdk/spdk_pid179936 00:43:26.906 Removing: /var/run/dpdk/spdk_pid180420 00:43:26.906 Removing: /var/run/dpdk/spdk_pid180512 00:43:26.906 Removing: /var/run/dpdk/spdk_pid181403 00:43:26.906 Removing: /var/run/dpdk/spdk_pid182254 00:43:26.906 Removing: /var/run/dpdk/spdk_pid182295 00:43:26.906 Removing: /var/run/dpdk/spdk_pid182333 00:43:26.906 Removing: /var/run/dpdk/spdk_pid182602 00:43:26.906 Removing: /var/run/dpdk/spdk_pid182770 00:43:26.906 Removing: /var/run/dpdk/spdk_pid182859 00:43:27.164 Removing: /var/run/dpdk/spdk_pid182950 00:43:27.164 Removing: /var/run/dpdk/spdk_pid182998 00:43:27.164 Removing: /var/run/dpdk/spdk_pid183025 00:43:27.164 Clean 00:43:27.164 23:29:16 -- common/autotest_common.sh@1451 -- # return 0 00:43:27.164 23:29:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:43:27.164 23:29:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:27.164 23:29:16 -- common/autotest_common.sh@10 -- # set +x 00:43:27.164 23:29:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:43:27.164 23:29:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:27.164 23:29:16 -- common/autotest_common.sh@10 -- # set +x 00:43:27.164 23:29:16 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:27.164 23:29:16 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:43:27.164 23:29:16 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:43:27.164 23:29:16 -- spdk/autotest.sh@391 -- # hash lcov 00:43:27.164 23:29:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:43:27.164 23:29:16 -- spdk/autotest.sh@393 -- # hostname 00:43:27.164 23:29:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:43:27.421 geninfo: WARNING: invalid characters removed from testname! 
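The coverage steps that follow merge the pre-test baseline capture with the post-test capture, then strip everything that is not SPDK's own code. Abbreviating the output paths and the repeated --rc options, the pipeline reduces to something like:

    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # Merge base + test captures, then remove DPDK and system sources
    # so only SPDK code remains in the combined tracefile.
    lcov $RC -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov $RC -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $RC -q -r cov_total.info '/usr/*' -o cov_total.info

The actual run additionally filters '*/examples/vmd/*', '*/app/spdk_lspci/*' and '*/app/spdk_top/*' the same way, as the lcov -r calls below show.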
00:44:14.094 23:29:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:14.094 23:30:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:15.468 23:30:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:18.752 23:30:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:22.037 23:30:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:24.566 23:30:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:27.902 23:30:17 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:27.902 23:30:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:27.902 23:30:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:27.902 23:30:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:27.902 23:30:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:27.902 23:30:17 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:27.902 23:30:17 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:27.902 23:30:17 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:27.902 23:30:17 -- paths/export.sh@5 -- $ export PATH 00:44:27.902 23:30:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:27.902 23:30:17 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:44:27.902 23:30:17 -- common/autobuild_common.sh@444 -- $ date +%s 00:44:27.902 23:30:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720913417.XXXXXX 00:44:27.902 23:30:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720913417.SNemYx 00:44:27.902 23:30:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:44:27.902 23:30:17 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:44:27.902 23:30:17 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:44:27.902 23:30:17 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:44:27.902 23:30:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:44:27.902 23:30:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:44:27.902 23:30:17 -- common/autobuild_common.sh@460 -- $ get_config_params 00:44:27.902 23:30:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:44:27.902 23:30:17 -- common/autotest_common.sh@10 -- $ set +x 00:44:27.902 23:30:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:44:27.902 23:30:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:44:27.902 23:30:17 -- pm/common@17 -- $ local monitor 00:44:27.902 23:30:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:27.902 23:30:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:27.902 23:30:17 -- pm/common@25 -- $ sleep 1 00:44:27.902 23:30:17 -- pm/common@21 -- $ date +%s 00:44:27.902 23:30:17 -- pm/common@21 -- $ date +%s 00:44:27.903 23:30:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720913417 00:44:27.903 23:30:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720913417 00:44:27.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720913417_collect-vmstat.pm.log 00:44:27.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720913417_collect-cpu-load.pm.log 00:44:28.837 23:30:18 
-- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:44:28.837 23:30:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:44:28.837 23:30:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:44:28.837 23:30:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:44:28.837 23:30:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:44:28.837 23:30:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:44:28.837 23:30:18 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:44:28.837 23:30:18 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:44:28.837 23:30:18 -- common/autotest_common.sh@10 -- $ set +x 00:44:28.837 23:30:18 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:44:28.837 23:30:18 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:44:28.837 23:30:18 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:44:28.837 23:30:18 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:44:28.838 23:30:18 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:28.838 23:30:18 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:28.838 23:30:18 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:44:28.838 23:30:18 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:44:28.838 23:30:18 -- spdk/autopackage.sh@40 -- $ get_config_params 00:44:28.838 23:30:18 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:44:28.838 23:30:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:44:28.838 23:30:18 -- common/autotest_common.sh@10 -- $ set +x 00:44:28.838 23:30:18 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:44:28.838 23:30:18 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto --disable-unit-tests 00:44:29.096 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:44:29.096 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:44:29.096 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:44:29.096 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:29.354 Using 'verbs' RDMA provider 00:44:42.119 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:44:52.127 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:44:52.127 Creating mk/config.mk...done. 00:44:52.127 Creating mk/cc.flags.mk...done. 00:44:52.127 Type 'make' to build. 00:44:52.127 23:30:41 -- spdk/autopackage.sh@43 -- $ make -j10 00:44:52.127 make[1]: Nothing to be done for 'all'. 
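autopackage then rebuilds the tree as a release build: it takes the recorded config_params, drops --enable-debug (the sed above), and reconfigures with LTO and unit tests disabled before invoking make; the CC/LIB lines that follow are that build. Under the paths of this run, the sequence amounts to:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator \
        --enable-ubsan --enable-asan --enable-coverage --with-raid5f \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --enable-lto --disable-unit-tests
    make -j10   # MAKEFLAGS=-j10, as set by autopackage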
00:44:52.384 CC lib/log/log.o
00:44:52.384 CC lib/log/log_flags.o
00:44:52.384 CC lib/log/log_deprecated.o
00:44:52.384 CC lib/ut_mock/mock.o
00:44:52.384 CC lib/ut/ut.o
00:44:52.641 LIB libspdk_ut_mock.a
00:44:52.641 LIB libspdk_ut.a
00:44:52.641 LIB libspdk_log.a
00:44:52.641 CC lib/dma/dma.o
00:44:52.641 CC lib/ioat/ioat.o
00:44:52.641 CC lib/util/base64.o
00:44:52.641 CXX lib/trace_parser/trace.o
00:44:52.641 CC lib/util/bit_array.o
00:44:52.641 CC lib/util/cpuset.o
00:44:52.641 CC lib/util/crc16.o
00:44:52.641 CC lib/util/crc32.o
00:44:52.641 CC lib/util/crc32c.o
00:44:52.901 CC lib/vfio_user/host/vfio_user_pci.o
00:44:52.901 CC lib/util/crc32_ieee.o
00:44:52.901 CC lib/util/crc64.o
00:44:52.901 CC lib/vfio_user/host/vfio_user.o
00:44:52.901 LIB libspdk_dma.a
00:44:52.901 CC lib/util/dif.o
00:44:52.901 CC lib/util/fd.o
00:44:52.901 CC lib/util/file.o
00:44:52.901 CC lib/util/hexlify.o
00:44:52.901 LIB libspdk_ioat.a
00:44:52.901 CC lib/util/iov.o
00:44:52.901 CC lib/util/math.o
00:44:52.901 CC lib/util/pipe.o
00:44:53.160 CC lib/util/strerror_tls.o
00:44:53.160 LIB libspdk_vfio_user.a
00:44:53.160 CC lib/util/string.o
00:44:53.160 CC lib/util/uuid.o
00:44:53.160 CC lib/util/fd_group.o
00:44:53.160 CC lib/util/xor.o
00:44:53.160 CC lib/util/zipf.o
00:44:53.419 LIB libspdk_util.a
00:44:53.419 LIB libspdk_trace_parser.a
00:44:53.419 CC lib/rdma_provider/common.o
00:44:53.419 CC lib/rdma_provider/rdma_provider_verbs.o
00:44:53.419 CC lib/json/json_parse.o
00:44:53.419 CC lib/rdma_utils/rdma_utils.o
00:44:53.419 CC lib/json/json_util.o
00:44:53.419 CC lib/json/json_write.o
00:44:53.419 CC lib/conf/conf.o
00:44:53.419 CC lib/vmd/vmd.o
00:44:53.419 CC lib/idxd/idxd.o
00:44:53.419 CC lib/env_dpdk/env.o
00:44:53.677 CC lib/env_dpdk/memory.o
00:44:53.677 LIB libspdk_rdma_provider.a
00:44:53.677 CC lib/env_dpdk/pci.o
00:44:53.678 CC lib/env_dpdk/init.o
00:44:53.678 CC lib/idxd/idxd_user.o
00:44:53.678 LIB libspdk_rdma_utils.a
00:44:53.678 LIB libspdk_conf.a
00:44:53.678 CC lib/vmd/led.o
00:44:53.678 CC lib/env_dpdk/threads.o
00:44:53.678 LIB libspdk_json.a
00:44:53.678 CC lib/env_dpdk/pci_ioat.o
00:44:53.678 CC lib/env_dpdk/pci_virtio.o
00:44:53.678 CC lib/env_dpdk/pci_vmd.o
00:44:53.678 LIB libspdk_vmd.a
00:44:53.936 LIB libspdk_idxd.a
00:44:53.936 CC lib/env_dpdk/pci_idxd.o
00:44:53.937 CC lib/env_dpdk/pci_event.o
00:44:53.937 CC lib/env_dpdk/sigbus_handler.o
00:44:53.937 CC lib/env_dpdk/pci_dpdk.o
00:44:53.937 CC lib/jsonrpc/jsonrpc_server.o
00:44:53.937 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:44:53.937 CC lib/env_dpdk/pci_dpdk_2207.o
00:44:53.937 CC lib/env_dpdk/pci_dpdk_2211.o
00:44:53.937 CC lib/jsonrpc/jsonrpc_client.o
00:44:53.937 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:44:54.194 LIB libspdk_jsonrpc.a
00:44:54.194 CC lib/rpc/rpc.o
00:44:54.451 LIB libspdk_env_dpdk.a
00:44:54.451 LIB libspdk_rpc.a
00:44:54.709 CC lib/trace/trace.o
00:44:54.709 CC lib/trace/trace_flags.o
00:44:54.709 CC lib/trace/trace_rpc.o
00:44:54.709 CC lib/notify/notify.o
00:44:54.709 CC lib/notify/notify_rpc.o
00:44:54.709 CC lib/keyring/keyring.o
00:44:54.709 CC lib/keyring/keyring_rpc.o
00:44:54.709 LIB libspdk_notify.a
00:44:54.709 LIB libspdk_keyring.a
00:44:54.709 LIB libspdk_trace.a
00:44:54.968 CC lib/thread/thread.o
00:44:54.968 CC lib/thread/iobuf.o
00:44:54.968 CC lib/sock/sock.o
00:44:54.968 CC lib/sock/sock_rpc.o
00:44:55.226 LIB libspdk_sock.a
00:44:55.483 LIB libspdk_thread.a
00:44:55.483 CC lib/nvme/nvme_ctrlr_cmd.o
00:44:55.483 CC lib/nvme/nvme_fabric.o
00:44:55.483 CC lib/nvme/nvme_ctrlr.o
00:44:55.484 CC lib/nvme/nvme_ns.o
00:44:55.484 CC lib/nvme/nvme_pcie_common.o
00:44:55.484 CC lib/nvme/nvme_ns_cmd.o
00:44:55.484 CC lib/nvme/nvme_pcie.o
00:44:55.484 CC lib/nvme/nvme_qpair.o
00:44:55.484 CC lib/nvme/nvme.o
00:44:55.484 CC lib/nvme/nvme_quirks.o
00:44:56.049 CC lib/nvme/nvme_transport.o
00:44:56.049 CC lib/nvme/nvme_discovery.o
00:44:56.049 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:44:56.049 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:44:56.049 CC lib/accel/accel.o
00:44:56.049 CC lib/blob/blobstore.o
00:44:56.308 CC lib/blob/request.o
00:44:56.308 CC lib/blob/zeroes.o
00:44:56.308 CC lib/init/json_config.o
00:44:56.308 CC lib/init/subsystem.o
00:44:56.308 CC lib/init/subsystem_rpc.o
00:44:56.308 CC lib/init/rpc.o
00:44:56.567 CC lib/blob/blob_bs_dev.o
00:44:56.567 CC lib/nvme/nvme_tcp.o
00:44:56.567 CC lib/accel/accel_rpc.o
00:44:56.567 LIB libspdk_init.a
00:44:56.567 CC lib/accel/accel_sw.o
00:44:56.567 CC lib/nvme/nvme_opal.o
00:44:56.567 CC lib/nvme/nvme_io_msg.o
00:44:56.567 CC lib/virtio/virtio.o
00:44:56.567 CC lib/virtio/virtio_vhost_user.o
00:44:56.567 CC lib/virtio/virtio_vfio_user.o
00:44:56.567 CC lib/virtio/virtio_pci.o
00:44:56.567 CC lib/event/app.o
00:44:56.825 LIB libspdk_accel.a
00:44:56.825 CC lib/event/reactor.o
00:44:56.825 CC lib/event/log_rpc.o
00:44:56.825 CC lib/event/app_rpc.o
00:44:56.825 CC lib/event/scheduler_static.o
00:44:56.825 LIB libspdk_virtio.a
00:44:56.825 CC lib/nvme/nvme_poll_group.o
00:44:56.825 CC lib/nvme/nvme_zns.o
00:44:56.825 CC lib/nvme/nvme_stubs.o
00:44:56.825 CC lib/nvme/nvme_auth.o
00:44:56.825 CC lib/nvme/nvme_cuse.o
00:44:57.084 LIB libspdk_event.a
00:44:57.084 CC lib/bdev/bdev.o
00:44:57.084 CC lib/bdev/bdev_rpc.o
00:44:57.084 CC lib/bdev/bdev_zone.o
00:44:57.084 CC lib/bdev/part.o
00:44:57.343 CC lib/bdev/scsi_nvme.o
00:44:57.343 CC lib/nvme/nvme_rdma.o
00:44:57.343 LIB libspdk_blob.a
00:44:57.602 CC lib/blobfs/blobfs.o
00:44:57.602 CC lib/blobfs/tree.o
00:44:57.602 CC lib/lvol/lvol.o
00:44:57.860 LIB libspdk_nvme.a
00:44:57.861 LIB libspdk_blobfs.a
00:44:58.119 LIB libspdk_lvol.a
00:44:58.119 LIB libspdk_bdev.a
00:44:58.378 CC lib/nbd/nbd.o
00:44:58.378 CC lib/nbd/nbd_rpc.o
00:44:58.378 CC lib/nvmf/ctrlr.o
00:44:58.378 CC lib/nvmf/ctrlr_bdev.o
00:44:58.378 CC lib/nvmf/ctrlr_discovery.o
00:44:58.378 CC lib/nvmf/nvmf.o
00:44:58.378 CC lib/nvmf/subsystem.o
00:44:58.378 CC lib/nvmf/nvmf_rpc.o
00:44:58.378 CC lib/scsi/dev.o
00:44:58.378 CC lib/ftl/ftl_core.o
00:44:58.378 CC lib/ftl/ftl_init.o
00:44:58.378 CC lib/scsi/lun.o
00:44:58.378 CC lib/scsi/port.o
00:44:58.637 LIB libspdk_nbd.a
00:44:58.637 CC lib/ftl/ftl_layout.o
00:44:58.637 CC lib/ftl/ftl_debug.o
00:44:58.637 CC lib/ftl/ftl_io.o
00:44:58.637 CC lib/scsi/scsi.o
00:44:58.637 CC lib/nvmf/transport.o
00:44:58.637 CC lib/nvmf/tcp.o
00:44:58.637 CC lib/nvmf/stubs.o
00:44:58.637 CC lib/scsi/scsi_bdev.o
00:44:58.637 CC lib/scsi/scsi_pr.o
00:44:58.637 CC lib/ftl/ftl_sb.o
00:44:58.637 CC lib/nvmf/mdns_server.o
00:44:58.637 CC lib/ftl/ftl_l2p.o
00:44:58.896 CC lib/ftl/ftl_l2p_flat.o
00:44:58.896 CC lib/ftl/ftl_nv_cache.o
00:44:58.896 CC lib/nvmf/rdma.o
00:44:58.896 CC lib/nvmf/auth.o
00:44:58.896 CC lib/scsi/scsi_rpc.o
00:44:58.896 CC lib/scsi/task.o
00:44:58.896 CC lib/ftl/ftl_band.o
00:44:58.896 CC lib/ftl/ftl_band_ops.o
00:44:58.896 CC lib/ftl/ftl_writer.o
00:44:58.896 CC lib/ftl/ftl_rq.o
00:44:58.896 CC lib/ftl/ftl_reloc.o
00:44:59.154 LIB libspdk_scsi.a
00:44:59.154 CC lib/ftl/ftl_l2p_cache.o
00:44:59.154 CC lib/ftl/ftl_p2l.o
00:44:59.154 CC lib/ftl/mngt/ftl_mngt.o
00:44:59.154 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:44:59.154 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:44:59.154 CC lib/ftl/mngt/ftl_mngt_startup.o
00:44:59.154 CC lib/vhost/vhost.o
00:44:59.413 CC lib/iscsi/conn.o
00:44:59.413 CC lib/ftl/mngt/ftl_mngt_md.o
00:44:59.413 CC lib/ftl/mngt/ftl_mngt_misc.o
00:44:59.413 CC lib/iscsi/init_grp.o
00:44:59.413 CC lib/iscsi/iscsi.o
00:44:59.413 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:44:59.413 CC lib/iscsi/md5.o
00:44:59.413 CC lib/iscsi/param.o
00:44:59.413 LIB libspdk_nvmf.a
00:44:59.672 CC lib/iscsi/portal_grp.o
00:44:59.672 CC lib/iscsi/tgt_node.o
00:44:59.672 CC lib/iscsi/iscsi_subsystem.o
00:44:59.672 CC lib/iscsi/iscsi_rpc.o
00:44:59.672 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:44:59.672 CC lib/iscsi/task.o
00:44:59.672 CC lib/ftl/mngt/ftl_mngt_band.o
00:44:59.672 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:44:59.672 CC lib/vhost/vhost_rpc.o
00:44:59.672 CC lib/vhost/vhost_scsi.o
00:44:59.672 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:44:59.672 CC lib/vhost/vhost_blk.o
00:44:59.672 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:44:59.931 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:44:59.931 CC lib/ftl/utils/ftl_conf.o
00:44:59.931 CC lib/vhost/rte_vhost_user.o
00:44:59.931 CC lib/ftl/utils/ftl_md.o
00:44:59.931 CC lib/ftl/utils/ftl_mempool.o
00:44:59.931 CC lib/ftl/utils/ftl_bitmap.o
00:44:59.931 LIB libspdk_iscsi.a
00:44:59.931 CC lib/ftl/utils/ftl_property.o
00:44:59.931 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:44:59.931 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:45:00.190 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:45:00.190 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:45:00.190 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:45:00.190 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:45:00.190 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:45:00.190 CC lib/ftl/upgrade/ftl_sb_v3.o
00:45:00.190 CC lib/ftl/upgrade/ftl_sb_v5.o
00:45:00.190 CC lib/ftl/nvc/ftl_nvc_dev.o
00:45:00.190 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:45:00.190 CC lib/ftl/base/ftl_base_dev.o
00:45:00.450 CC lib/ftl/base/ftl_base_bdev.o
00:45:00.450 LIB libspdk_ftl.a
00:45:00.450 LIB libspdk_vhost.a
00:45:00.709 CC module/env_dpdk/env_dpdk_rpc.o
00:45:00.709 CC module/accel/iaa/accel_iaa.o
00:45:00.709 CC module/accel/dsa/accel_dsa.o
00:45:00.709 CC module/accel/error/accel_error.o
00:45:00.709 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:45:00.709 CC module/sock/posix/posix.o
00:45:00.709 CC module/accel/ioat/accel_ioat.o
00:45:00.709 CC module/blob/bdev/blob_bdev.o
00:45:00.967 CC module/keyring/file/keyring.o
00:45:00.967 CC module/scheduler/dynamic/scheduler_dynamic.o
00:45:00.967 LIB libspdk_env_dpdk_rpc.a
00:45:00.967 CC module/accel/ioat/accel_ioat_rpc.o
00:45:00.967 LIB libspdk_scheduler_dpdk_governor.a
00:45:00.967 CC module/accel/dsa/accel_dsa_rpc.o
00:45:00.967 CC module/accel/error/accel_error_rpc.o
00:45:00.967 CC module/keyring/file/keyring_rpc.o
00:45:00.967 CC module/accel/iaa/accel_iaa_rpc.o
00:45:00.967 LIB libspdk_scheduler_dynamic.a
00:45:00.967 LIB libspdk_blob_bdev.a
00:45:00.967 LIB libspdk_accel_ioat.a
00:45:00.967 LIB libspdk_accel_dsa.a
00:45:00.967 LIB libspdk_keyring_file.a
00:45:00.967 LIB libspdk_accel_iaa.a
00:45:01.226 LIB libspdk_accel_error.a
00:45:01.226 CC module/keyring/linux/keyring.o
00:45:01.226 CC module/keyring/linux/keyring_rpc.o
00:45:01.226 CC module/scheduler/gscheduler/gscheduler.o
00:45:01.226 LIB libspdk_sock_posix.a
00:45:01.226 CC module/bdev/delay/vbdev_delay.o
00:45:01.226 CC module/bdev/error/vbdev_error.o
00:45:01.226 CC module/bdev/gpt/gpt.o
00:45:01.226 CC module/bdev/delay/vbdev_delay_rpc.o
00:45:01.226 CC module/bdev/lvol/vbdev_lvol.o
00:45:01.226 CC module/blobfs/bdev/blobfs_bdev.o
00:45:01.226 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:45:01.226 LIB libspdk_keyring_linux.a
00:45:01.226 LIB libspdk_scheduler_gscheduler.a
00:45:01.226 CC module/bdev/malloc/bdev_malloc.o
00:45:01.226 CC module/bdev/gpt/vbdev_gpt.o
00:45:01.485 LIB libspdk_blobfs_bdev.a
00:45:01.485 CC module/bdev/null/bdev_null.o
00:45:01.485 CC module/bdev/error/vbdev_error_rpc.o
00:45:01.485 CC module/bdev/null/bdev_null_rpc.o
00:45:01.485 CC module/bdev/nvme/bdev_nvme.o
00:45:01.485 LIB libspdk_bdev_delay.a
00:45:01.485 CC module/bdev/nvme/bdev_nvme_rpc.o
00:45:01.485 CC module/bdev/malloc/bdev_malloc_rpc.o
00:45:01.485 CC module/bdev/passthru/vbdev_passthru.o
00:45:01.485 CC module/bdev/raid/bdev_raid.o
00:45:01.485 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:45:01.485 LIB libspdk_bdev_gpt.a
00:45:01.485 LIB libspdk_bdev_error.a
00:45:01.485 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:45:01.485 CC module/bdev/raid/bdev_raid_rpc.o
00:45:01.485 CC module/bdev/raid/bdev_raid_sb.o
00:45:01.485 LIB libspdk_bdev_null.a
00:45:01.485 CC module/bdev/raid/raid0.o
00:45:01.485 LIB libspdk_bdev_malloc.a
00:45:01.744 CC module/bdev/raid/raid1.o
00:45:01.744 LIB libspdk_bdev_passthru.a
00:45:01.744 CC module/bdev/split/vbdev_split.o
00:45:01.744 CC module/bdev/split/vbdev_split_rpc.o
00:45:01.744 CC module/bdev/raid/concat.o
00:45:01.744 LIB libspdk_bdev_lvol.a
00:45:01.744 CC module/bdev/raid/raid5f.o
00:45:01.744 CC module/bdev/nvme/nvme_rpc.o
00:45:01.744 CC module/bdev/aio/bdev_aio.o
00:45:01.744 CC module/bdev/aio/bdev_aio_rpc.o
00:45:02.003 CC module/bdev/zone_block/vbdev_zone_block.o
00:45:02.003 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:45:02.003 CC module/bdev/nvme/bdev_mdns_client.o
00:45:02.003 LIB libspdk_bdev_split.a
00:45:02.003 CC module/bdev/nvme/vbdev_opal.o
00:45:02.003 CC module/bdev/nvme/vbdev_opal_rpc.o
00:45:02.003 LIB libspdk_bdev_raid.a
00:45:02.003 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:45:02.003 CC module/bdev/ftl/bdev_ftl.o
00:45:02.003 CC module/bdev/ftl/bdev_ftl_rpc.o
00:45:02.003 CC module/bdev/iscsi/bdev_iscsi.o
00:45:02.003 LIB libspdk_bdev_aio.a
00:45:02.003 LIB libspdk_bdev_zone_block.a
00:45:02.003 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:45:02.275 CC module/bdev/virtio/bdev_virtio_scsi.o
00:45:02.275 CC module/bdev/virtio/bdev_virtio_rpc.o
00:45:02.275 CC module/bdev/virtio/bdev_virtio_blk.o
00:45:02.275 LIB libspdk_bdev_ftl.a
00:45:02.275 LIB libspdk_bdev_iscsi.a
00:45:02.275 LIB libspdk_bdev_nvme.a
00:45:02.545 LIB libspdk_bdev_virtio.a
00:45:02.804 CC module/event/subsystems/vmd/vmd.o
00:45:02.804 CC module/event/subsystems/vmd/vmd_rpc.o
00:45:02.804 CC module/event/subsystems/iobuf/iobuf.o
00:45:02.804 CC module/event/subsystems/sock/sock.o
00:45:02.804 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:45:02.804 CC module/event/subsystems/keyring/keyring.o
00:45:02.804 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:45:02.804 CC module/event/subsystems/scheduler/scheduler.o
00:45:02.804 LIB libspdk_event_keyring.a
00:45:02.804 LIB libspdk_event_vhost_blk.a
00:45:02.804 LIB libspdk_event_vmd.a
00:45:02.804 LIB libspdk_event_iobuf.a
00:45:02.804 LIB libspdk_event_scheduler.a
00:45:02.804 LIB libspdk_event_sock.a
00:45:03.063 CC module/event/subsystems/accel/accel.o
00:45:03.322 LIB libspdk_event_accel.a
00:45:03.322 CC module/event/subsystems/bdev/bdev.o
00:45:03.581 LIB libspdk_event_bdev.a
00:45:03.840 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:45:03.840 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:45:03.840 CC module/event/subsystems/scsi/scsi.o
00:45:03.840 CC module/event/subsystems/nbd/nbd.o
00:45:03.840 LIB libspdk_event_scsi.a
00:45:03.840 LIB libspdk_event_nbd.a
00:45:04.098 LIB libspdk_event_nvmf.a
00:45:04.098 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:45:04.098 CC module/event/subsystems/iscsi/iscsi.o
00:45:04.356 LIB libspdk_event_vhost_scsi.a
00:45:04.356 LIB libspdk_event_iscsi.a
00:45:04.614 CC app/trace_record/trace_record.o
00:45:04.614 CXX app/trace/trace.o
00:45:04.614 CC app/spdk_nvme_perf/perf.o
00:45:04.614 CC app/spdk_lspci/spdk_lspci.o
00:45:04.614 CC app/spdk_nvme_identify/identify.o
00:45:04.614 CC app/iscsi_tgt/iscsi_tgt.o
00:45:04.614 CC app/nvmf_tgt/nvmf_main.o
00:45:04.614 CC app/spdk_tgt/spdk_tgt.o
00:45:04.614 CC examples/util/zipf/zipf.o
00:45:04.614 CC test/thread/poller_perf/poller_perf.o
00:45:04.614 LINK spdk_lspci
00:45:04.872 LINK nvmf_tgt
00:45:04.872 LINK zipf
00:45:04.872 LINK spdk_trace_record
00:45:04.872 LINK iscsi_tgt
00:45:04.872 LINK poller_perf
00:45:04.872 LINK spdk_tgt
00:45:04.872 LINK spdk_trace
00:45:05.131 LINK spdk_nvme_identify
00:45:05.131 LINK spdk_nvme_perf
00:45:07.662 CC examples/ioat/perf/perf.o
00:45:07.921 LINK ioat_perf
00:45:08.488 CC examples/ioat/verify/verify.o
00:45:09.055 LINK verify
00:45:14.322 CC test/thread/lock/spdk_lock.o
00:45:16.854 CC test/dma/test_dma/test_dma.o
00:45:17.821 LINK test_dma
00:45:18.079 LINK spdk_lock
00:45:19.454 CC test/app/bdev_svc/bdev_svc.o
00:45:20.020 LINK bdev_svc
00:45:22.554 CC examples/vmd/lsvmd/lsvmd.o
00:45:23.490 LINK lsvmd
00:45:25.390 CC examples/idxd/perf/perf.o
00:45:26.327 LINK idxd_perf
00:45:31.596 CC app/spdk_nvme_discover/discovery_aer.o
00:45:31.855 LINK spdk_nvme_discover
00:45:33.756 CC app/spdk_top/spdk_top.o
00:45:37.058 LINK spdk_top
00:45:40.343 CC examples/vmd/led/led.o
00:45:40.603 CC examples/interrupt_tgt/interrupt_tgt.o
00:45:41.172 LINK led
00:45:42.108 LINK interrupt_tgt
00:45:54.337 CC app/vhost/vhost.o
00:45:54.337 LINK vhost
00:45:57.624 CC app/spdk_dd/spdk_dd.o
00:45:59.527 LINK spdk_dd
00:46:11.750 TEST_HEADER include/spdk/config.h
00:46:11.750 CXX test/cpp_headers/accel.o
00:46:12.685 CXX test/cpp_headers/accel_module.o
00:46:13.622 CXX test/cpp_headers/assert.o
00:46:14.556 CXX test/cpp_headers/barrier.o
00:46:14.556 CXX test/cpp_headers/base64.o
00:46:15.490 CXX test/cpp_headers/bdev.o
00:46:16.865 CXX test/cpp_headers/bdev_module.o
00:46:17.430 CC test/env/mem_callbacks/mem_callbacks.o
00:46:18.364 CXX test/cpp_headers/bdev_zone.o
00:46:18.364 LINK mem_callbacks
00:46:19.741 CXX test/cpp_headers/bit_array.o
00:46:21.116 CXX test/cpp_headers/bit_pool.o
00:46:22.491 CXX test/cpp_headers/blob.o
00:46:23.868 CXX test/cpp_headers/blob_bdev.o
00:46:24.127 CC test/env/vtophys/vtophys.o
00:46:25.061 CXX test/cpp_headers/blobfs.o
00:46:25.320 LINK vtophys
00:46:26.695 CXX test/cpp_headers/blobfs_bdev.o
00:46:28.070 CXX test/cpp_headers/conf.o
00:46:29.444 CXX test/cpp_headers/config.o
00:46:29.703 CXX test/cpp_headers/cpuset.o
00:46:31.157 CXX test/cpp_headers/crc16.o
00:46:32.536 CXX test/cpp_headers/crc32.o
00:46:33.915 CXX test/cpp_headers/crc64.o
00:46:35.290 CXX test/cpp_headers/dif.o
00:46:36.667 CXX test/cpp_headers/dma.o
00:46:38.045 CXX test/cpp_headers/endian.o
00:46:39.421 CXX test/cpp_headers/env.o
00:46:40.794 CXX test/cpp_headers/env_dpdk.o
00:46:42.170 CXX test/cpp_headers/event.o
00:46:43.547 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:46:43.805 CXX test/cpp_headers/fd.o
00:46:44.741 LINK env_dpdk_post_init
00:46:45.000 CXX test/cpp_headers/fd_group.o
00:46:46.377 CXX test/cpp_headers/file.o
00:46:47.754 CXX test/cpp_headers/ftl.o
00:46:49.657 CXX test/cpp_headers/gpt_spec.o
00:46:50.589 CXX test/cpp_headers/hexlify.o
00:46:51.963 CXX test/cpp_headers/histogram_data.o
00:46:53.865 CXX test/cpp_headers/idxd.o
00:46:54.798 CXX test/cpp_headers/idxd_spec.o
00:46:56.702 CXX test/cpp_headers/init.o
00:46:58.077 CXX test/cpp_headers/ioat.o
00:46:59.457 CXX test/cpp_headers/ioat_spec.o
00:47:00.832 CXX test/cpp_headers/iscsi_spec.o
00:47:02.210 CXX test/cpp_headers/json.o
00:47:03.587 CXX test/cpp_headers/jsonrpc.o
00:47:04.982 CXX test/cpp_headers/keyring.o
00:47:06.354 CXX test/cpp_headers/keyring_module.o
00:47:08.254 CXX test/cpp_headers/likely.o
00:47:09.189 CXX test/cpp_headers/log.o
00:47:10.567 CC examples/thread/thread/thread_ex.o
00:47:10.567 CXX test/cpp_headers/lvol.o
00:47:11.945 LINK thread
00:47:11.945 CXX test/cpp_headers/memory.o
00:47:13.321 CXX test/cpp_headers/mmio.o
00:47:13.899 CXX test/cpp_headers/nbd.o
00:47:14.157 CXX test/cpp_headers/notify.o
00:47:15.532 CXX test/cpp_headers/nvme.o
00:47:16.468 CXX test/cpp_headers/nvme_intel.o
00:47:17.844 CXX test/cpp_headers/nvme_ocssd.o
00:47:18.782 CXX test/cpp_headers/nvme_ocssd_spec.o
00:47:20.157 CXX test/cpp_headers/nvme_spec.o
00:47:21.093 CXX test/cpp_headers/nvme_zns.o
00:47:22.028 CXX test/cpp_headers/nvmf.o
00:47:22.028 CC test/env/memory/memory_ut.o
00:47:22.286 CC test/event/event_perf/event_perf.o
00:47:22.853 LINK event_perf
00:47:23.112 CXX test/cpp_headers/nvmf_cmd.o
00:47:23.680 LINK memory_ut
00:47:23.940 CXX test/cpp_headers/nvmf_fc_spec.o
00:47:24.506 CXX test/cpp_headers/nvmf_spec.o
00:47:25.073 CXX test/cpp_headers/nvmf_transport.o
00:47:26.009 CXX test/cpp_headers/opal.o
00:47:26.945 CXX test/cpp_headers/opal_spec.o
00:47:27.881 CXX test/cpp_headers/pci_ids.o
00:47:28.817 CXX test/cpp_headers/pipe.o
00:47:29.384 CC test/env/pci/pci_ut.o
00:47:29.642 CXX test/cpp_headers/queue.o
00:47:29.901 CXX test/cpp_headers/reduce.o
00:47:29.901 CXX test/cpp_headers/rpc.o
00:47:30.469 LINK pci_ut
00:47:31.037 CXX test/cpp_headers/scheduler.o
00:47:31.974 CC examples/sock/hello_world/hello_sock.o
00:47:31.974 CC test/app/histogram_perf/histogram_perf.o
00:47:31.974 CXX test/cpp_headers/scsi.o
00:47:31.974 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:47:32.542 LINK histogram_perf
00:47:32.802 LINK hello_sock
00:47:32.802 CXX test/cpp_headers/scsi_spec.o
00:47:33.061 LINK nvme_fuzz
00:47:33.629 CXX test/cpp_headers/sock.o
00:47:34.204 CXX test/cpp_headers/stdinc.o
00:47:34.806 CXX test/cpp_headers/string.o
00:47:35.446 CC test/event/reactor/reactor.o
00:47:35.705 CXX test/cpp_headers/thread.o
00:47:36.273 LINK reactor
00:47:36.532 CXX test/cpp_headers/trace.o
00:47:37.100 CXX test/cpp_headers/trace_parser.o
00:47:38.037 CXX test/cpp_headers/tree.o
00:47:38.037 CC test/rpc_client/rpc_client_test.o
00:47:38.037 CXX test/cpp_headers/ublk.o
00:47:38.974 LINK rpc_client_test
00:47:38.974 CXX test/cpp_headers/util.o
00:47:40.351 CXX test/cpp_headers/uuid.o
00:47:41.287 CXX test/cpp_headers/version.o
00:47:41.545 CXX test/cpp_headers/vfio_user_pci.o
00:47:42.481 CXX test/cpp_headers/vfio_user_spec.o
00:47:43.857 CXX test/cpp_headers/vhost.o
00:47:44.792 CXX test/cpp_headers/vmd.o
00:47:45.726 CC test/app/jsoncat/jsoncat.o
00:47:45.726 CXX test/cpp_headers/xor.o
00:47:46.292 CC app/fio/nvme/fio_plugin.o
00:47:46.292 LINK jsoncat
00:47:46.551 CXX test/cpp_headers/zipf.o
00:47:48.454 CC app/fio/bdev/fio_plugin.o
00:47:48.454 LINK spdk_nvme
00:47:50.352 LINK spdk_bdev
00:47:52.247 CC test/accel/dif/dif.o
00:47:53.177 CC test/event/reactor_perf/reactor_perf.o
00:47:53.435 LINK dif
00:47:53.693 CC examples/nvme/hello_world/hello_world.o
00:47:53.952 LINK reactor_perf
00:47:54.519 LINK hello_world
00:48:01.105 CC test/app/stub/stub.o
00:48:01.363 LINK stub
00:48:05.546 CC test/blobfs/mkfs/mkfs.o
00:48:05.805 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:48:06.371 LINK mkfs
00:48:11.638 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:48:11.638 LINK iscsi_fuzz
00:48:11.638 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:48:11.638 CC test/event/app_repeat/app_repeat.o
00:48:13.010 LINK app_repeat
00:48:13.268 LINK vhost_fuzz
00:48:35.195 CC examples/nvme/reconnect/reconnect.o
00:48:36.125 LINK reconnect
00:48:44.242 CC examples/nvme/nvme_manage/nvme_manage.o
00:48:46.141 LINK nvme_manage
00:48:48.042 CC test/lvol/esnap/esnap.o
00:48:50.575 CC test/nvme/aer/aer.o
00:48:52.476 LINK aer
00:48:53.851 CC test/nvme/reset/reset.o
00:48:55.752 LINK reset
00:49:02.334 CC test/event/scheduler/scheduler.o
00:49:03.711 LINK scheduler
00:49:03.969 LINK esnap
00:49:12.081 CC test/nvme/sgl/sgl.o
00:49:12.649 LINK sgl
00:49:20.762 CC examples/nvme/arbitration/arbitration.o
00:49:23.326 LINK arbitration
00:49:33.295 CC examples/nvme/hotplug/hotplug.o
00:49:34.229 LINK hotplug
00:49:34.793 CC test/nvme/e2edp/nvme_dp.o
00:49:36.693 LINK nvme_dp
00:49:41.959 CC examples/nvme/cmb_copy/cmb_copy.o
00:49:42.525 LINK cmb_copy
00:49:50.639 CC examples/nvme/abort/abort.o
00:49:52.541 LINK abort
00:49:55.827 CC test/nvme/overhead/overhead.o
00:49:57.732 LINK overhead
00:50:15.847 CC test/nvme/err_injection/err_injection.o
00:50:15.847 LINK err_injection
00:50:17.227 CC test/nvme/startup/startup.o
00:50:17.227 CC test/nvme/reserve/reserve.o
00:50:18.165 LINK startup
00:50:18.165 LINK reserve
00:50:20.727 CC test/nvme/simple_copy/simple_copy.o
00:50:21.294 CC examples/accel/perf/accel_perf.o
00:50:21.861 LINK simple_copy
00:50:23.239 LINK accel_perf
00:50:26.520 CC examples/blob/hello_world/hello_blob.o
00:50:27.453 LINK hello_blob
00:50:35.571 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:50:35.571 CC test/nvme/connect_stress/connect_stress.o
00:50:36.946 LINK pmr_persistence
00:50:36.946 LINK connect_stress
00:50:40.231 CC test/nvme/boot_partition/boot_partition.o
00:50:40.799 LINK boot_partition
00:50:53.009 CC examples/blob/cli/blobcli.o
00:50:53.009 CC test/nvme/compliance/nvme_compliance.o
00:50:53.943 LINK blobcli
00:50:54.508 LINK nvme_compliance
00:50:57.045 CC test/nvme/fused_ordering/fused_ordering.o
00:50:57.045 CC test/nvme/doorbell_aers/doorbell_aers.o
00:50:57.644 LINK fused_ordering
00:50:58.209 LINK doorbell_aers
00:50:58.775 CC examples/bdev/hello_world/hello_bdev.o
00:50:59.710 LINK hello_bdev
00:51:14.585 CC examples/bdev/bdevperf/bdevperf.o
00:51:15.960 CC test/bdev/bdevio/bdevio.o
00:51:16.897 LINK bdevperf
00:51:17.463 LINK bdevio
00:51:18.399 CC test/nvme/fdp/fdp.o
00:51:19.776 LINK fdp
00:51:26.368 CC test/nvme/cuse/cuse.o
00:51:30.560 LINK cuse
00:52:51.992 CC examples/nvmf/nvmf/nvmf.o
00:52:51.992 LINK nvmf
00:53:00.104 23:38:49 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:53:00.364 make[1]: Nothing to be done for 'clean'.
00:53:06.928 23:38:55 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:53:06.928 23:38:55 -- common/autotest_common.sh@728 -- $ xtrace_disable
00:53:06.928 23:38:55 -- common/autotest_common.sh@10 -- $ set +x
00:53:06.928 23:38:55 -- spdk/autopackage.sh@48 -- $ timing_finish
00:53:06.928 23:38:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:53:06.928 23:38:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:53:06.928 23:38:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:53:06.928 23:38:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:53:06.928 23:38:55 -- pm/common@29 -- $ signal_monitor_resources TERM
00:53:06.928 23:38:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:53:06.928 23:38:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:53:06.928 23:38:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:53:06.928 23:38:55 -- pm/common@44 -- $ pid=184557
00:53:06.928 23:38:55 -- pm/common@50 -- $ kill -TERM 184557
00:53:06.928 23:38:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:53:06.928 23:38:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:53:06.928 23:38:55 -- pm/common@44 -- $ pid=184559
00:53:06.928 23:38:55 -- pm/common@50 -- $ kill -TERM 184559
00:53:06.928 + [[ -n 2277 ]]
00:53:06.928 + sudo kill 2277
00:53:06.940 [Pipeline] }
00:53:06.959 [Pipeline] // timeout
00:53:06.965 [Pipeline] }
00:53:06.982 [Pipeline] // stage
00:53:06.988 [Pipeline] }
00:53:07.006 [Pipeline] // catchError
00:53:07.016 [Pipeline] stage
00:53:07.019 [Pipeline] { (Stop VM)
00:53:07.035 [Pipeline] sh
00:53:07.312 + vagrant halt
00:53:10.602 ==> default: Halting domain...
00:53:20.591 [Pipeline] sh
00:53:20.870 + vagrant destroy -f
00:53:24.156 ==> default: Removing domain...
00:53:25.553 [Pipeline] sh
00:53:25.899 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:53:25.907 [Pipeline] }
00:53:25.923 [Pipeline] // stage
00:53:25.929 [Pipeline] }
00:53:25.945 [Pipeline] // dir
00:53:25.951 [Pipeline] }
00:53:25.968 [Pipeline] // wrap
00:53:25.974 [Pipeline] }
00:53:25.990 [Pipeline] // catchError
00:53:25.998 [Pipeline] stage
00:53:26.000 [Pipeline] { (Epilogue)
00:53:26.015 [Pipeline] sh
00:53:26.296 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:53:48.236 [Pipeline] catchError
00:53:48.238 [Pipeline] {
00:53:48.257 [Pipeline] sh
00:53:48.539 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:53:48.539 Artifacts sizes are good
00:53:48.549 [Pipeline] }
00:53:48.569 [Pipeline] // catchError
00:53:48.583 [Pipeline] archiveArtifacts
00:53:48.591 Archiving artifacts
00:53:48.990 [Pipeline] cleanWs
00:53:49.009 [WS-CLEANUP] Deleting project workspace...
00:53:49.009 [WS-CLEANUP] Deferred wipeout is used...
00:53:49.037 [WS-CLEANUP] done
00:53:49.040 [Pipeline] }
00:53:49.062 [Pipeline] // stage
00:53:49.069 [Pipeline] }
00:53:49.090 [Pipeline] // node
00:53:49.097 [Pipeline] End of Pipeline
00:53:49.152 Finished: SUCCESS